Following President Trump’s increasingly hard-line stance on China, numerous fake Chinese accounts have taken shots at the president on social media over a number of issues, including the coronavirus pandemic, according to newly published research.
The research, put together by Graphika, details the actions taken by a network dubbed “Spamouflage Dragon,” a pro-Chinese spam operation. Before being taken down by Google-owned YouTube, Facebook and Twitter, the spam accounts, some of which were created this year, appeared to “operate in a dispersed model,” the research firm said.
“The latest wave of Spamouflage activity differs in two key ways from its predecessors,” researchers wrote. “First, it includes a wealth of videos in English and targets the United States, especially its foreign policy, its handling of the coronavirus outbreak, its racial inequalities, and its moves against TikTok. This is the first time the network has published substantial volumes of English-language content alongside its ongoing Chinese coverage–a clear expansion of its scope.”
The accounts made “heavy” use of video footage from pro-Chinese government channels and attacked the president with memes published in both Chinese and English.
A person familiar with Twitter’s thinking told 360aproko News that the Jack Dorsey-led company had suspended a number of the accounts mentioned in Graphika’s report before being notified by the research firm.
Trump, who is running against Democratic presidential candidate Joe Biden, has in recent weeks ordered the closure of the Chinese consulate in Houston and signed executive orders aimed at banning WeChat and TikTok, two popular Chinese-owned apps.
In response, the Chinese government ordered the closure of the U.S. consulate in Chengdu. In addition, China’s foreign ministry said it opposed the executive orders and would defend the rights of Chinese businesses, according to ministry spokesman Wang Wenbin.
“The U.S. is using national security as an excuse and using state power to oppress non-American businesses,” Wang told reporters, according to Reuters. “That’s just a hegemonic practice. China is firmly opposed to that.”
The network is considered “technologically advanced,” according to the Washington Post, using artificial intelligence to create fake faces and producing videos at a rate of roughly one per day. Some of the videos put Biden in a flattering light and predicted Trump would lose the election in November.
One particular video, posted to YouTube, was titled “When I voted for Trump, I almost sentenced myself to death.”
A YouTube spokesperson told 360aproko News that Google’s Threat Analysis Group (TAG) is responsible for handling these types of attacks.
TAG, which tracks more than 270 “targeted or government-backed attacker groups” from more than 50 countries, including China and Russia, terminated 186 YouTube channels recently after an investigation into coordinated influence operations linked to China.
“These channels mostly uploaded spammy, non-political content, but a small subset posted political content primarily in Chinese similar to the findings in a recent Graphika report, including content related to the U.S. response to COVID-19,” TAG wrote in a post published last week.
In July, TAG sent more than 1,700 warnings to users whose accounts were targeted by government-backed attackers.
A Twitter spokesperson told 360aproko News that the company discloses such threats as soon as it learns of them and welcomes support from external stakeholders.
“We welcome the chance to collaborate with external stakeholders, including Graphika on identifying and taking action on attempts to manipulate the conversation on Twitter,” the spokesperson said in an email. “As we shared during a recent series of workshops on information operations with the Carnegie PCIO, we rely on partnerships to do this work, and are committed to growing the public, shared understanding of this issue. When we identify information operation campaigns that we can reliably attribute to state-backed activity, we disclose them to the public.”
According to a person familiar with Facebook’s thinking, the company removed a network that used “spammy tactics” to push Chinese-centric content for violating the social network’s community standards in September 2019, around the same time Graphika first detected Spamouflage.
The person added that the content didn’t get much engagement from users on the company’s platform.
Since September, Facebook has continued to work with other tech companies and researchers to find and remove the networks from their platforms.
Graphika, which first identified the network in September 2019, according to its report, noticed consistent errors in the videos. These included an apparent inability to grasp spoken English, such as mixing up “us” and “U.S.” or confusing commonly used phrases, the Post added.
Graphika has not yet been able to determine whether the group has a relationship with the Chinese government, though it has noticed the network’s tone shifting in step with that of the Chinese government.
“In March, as the Chinese government’s narrative shifted to arguing that China had responded better than the United States, it tweeted about the reported wave of xenophobic attacks on Chinese Americans linked to the outbreak,” the researchers wrote in a separate report, published in April.
While the network can produce a video on a specific topic within 36 hours, which Graphika said “indicates a degree of tactical flexibility and reactiveness and suggests a relatively rapid production cycle,” the videos have not gone viral.
“Very few” of the videos have reached more than 100 views, with some garnering a “handful” of views, while others had none at all.