In social media ‘arms race,’ tech giants chase evolving trolls and bots

(NEW YORK) — In Darwinian fashion, some online trolls and bots have evolved into more sophisticated, stealthy threats that social media companies and security researchers are finding more difficult to spot, according to a top Facebook security official and several information operations experts.

While algorithms and human investigators catch millions of inauthentic social media accounts every day, more sophisticated actors constantly alter their schemes in an attempt to duck the newest security innovations.

It’s a cat-and-mouse game playing out behind our digital screens that information operations expert Renee DiResta, director of research at the Austin, Texas-based cybersecurity firm New Knowledge, doesn’t see stopping anytime soon.

“So now the people at the first line of defense are the people at the tech platforms and what they’re going to find is as they change the state of play, as they change the rules a little bit, the adversary is going to evolve and respond,” said DiResta. “This is going to be an arms race that’s going to play out kind of for the indefinite future.”

Ben Nimmo, an information defense fellow at the Atlantic Council’s Digital Forensic Research Lab, said that two years ago Russian trolls on social media, in particular, were “getting away with everything… but those days are really over. So you’ve got trolls really trying to hide.”

Facebook cybersecurity policy chief Nathaniel Gleicher said Facebook, which has partnered with the Atlantic Council to help identify disinformation, has seen trolls use virtual private networks, or VPNs, which help obscure their physical location, or link accounts to international cell phone numbers that better match the account’s purported location.
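In principle, that kind of evasion is aimed at simple consistency checks. The sketch below is a hypothetical Python illustration of such a check, comparing the country implied by a login IP with the international prefix of a linked phone number; the prefix table, field names, and functions are assumptions made for this article, not Facebook’s actual systems.

```python
# Hypothetical illustration of a location-consistency signal: compare the
# country implied by an account's login IP with the country code of its
# linked phone number. The tiny prefix table and function names are
# illustrative assumptions only, not any platform's real API.
PHONE_PREFIX_TO_COUNTRY = {"+1": "US", "+44": "GB", "+7": "RU"}  # tiny sample

def phone_country(phone: str) -> str | None:
    """Map a phone number's international prefix to a country code."""
    for prefix, country in PHONE_PREFIX_TO_COUNTRY.items():
        if phone.startswith(prefix):
            return country
    return None

def location_mismatch(ip_country: str, phone: str) -> bool:
    """True when IP geolocation and phone prefix point to different countries.

    A VPN masks the IP side, and a locally matched phone number fixes the
    phone side -- which is exactly the evasion described above.
    """
    country = phone_country(phone)
    return country is not None and country != ip_country

# A Russian IP paired with a U.S. number trips the check...
assert location_mismatch("RU", "+15551234567")
# ...but a U.S. VPN exit plus a U.S. number sails through.
assert not location_mismatch("US", "+15551234567")
```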

“The longer an actor has been in a space, the more sophisticated they get,” Gleicher said of Russian trolls in particular. Though other trolls are also evolving, researchers have paid particular attention to those suspected to come from Russia in the wake of revelations about the purported Moscow-directed online influence operation ahead of the 2016 presidential election.

Suspected Russian-linked accounts have also put more effort into appearing legitimate by creating more involved backstories for the “different personas they use,” according to Lee Foster, manager of information operations intelligence analysis at the cybersecurity firm FireEye.

Foster said suspected trolls appear to be trying to disguise the spread of propaganda through proxies, amplifying “legitimate” American commentators who make statements in line with the trolls’ goals. Gleicher said it works the opposite way as well, when foreign inauthentic accounts “seed” memes or discussion points online and just wait for real Americans to pick them up and spread them on their own.

In court papers filed on Friday, for example, the Department of Justice described how an alleged Russian troll disguised as an American created a Facebook page to sow divisiveness and then encouraged real American users to populate the page and spread the content, which focused on the hot-button issue of immigration. The DOJ also said Russian trolls on Twitter pretended to be Americans and posted or retweeted divisive content, gaining thousands of followers, presumably including many real Americans, in the process. The Russian government has dismissed the influence operation accusations.

The intentional mixing of legitimate speech online and foreign influence operations presents a new challenge for Facebook, and Gleicher said the company is fighting back by targeting either the inauthentic behavior or the misinformation content itself, depending on the situation.

But at the end of the day, he said, “How do you disentangle that sort of thing?”

On Twitter, where automated accounts known as bots are more of an issue than on Facebook, Nimmo has seen a marked difference in the way bot networks operate compared to how they operated before the Twitter crackdown that followed the 2016 election, as shown in a mountain of data recently released by the social media giant.

Previously, networks of automated accounts would reuse profile photos, hardly bother to meaningfully change profile names, and post rapidly with inhuman timing – similar to a firearms-themed suspected bot network that ABC News discovered in August, which was quickly taken down.

Now, the networks tend to be smaller, Nimmo said, and the accounts are more realistic-looking, with posts that appear human-like, all in the hope of ducking Twitter’s detection algorithms.
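Those old-style signals – shared profile photos and machine-fast, metronome-regular posting – are crude enough to express in a few lines of code, which is partly why the newer, smaller, more human-looking networks are harder to catch. The sketch below is a hypothetical illustration of that earlier generation of detection logic; the data structures, thresholds, and function names are assumptions made for this article, not any platform’s actual system.

```python
from collections import Counter
from dataclasses import dataclass
from statistics import pstdev

# Hypothetical sketch of old-style bot-network signals: shared profile
# photos and inhuman posting cadence. All thresholds and names here are
# illustrative assumptions, not any platform's real detector.

@dataclass
class Account:
    name: str
    photo_hash: str          # hash of the profile image
    post_times: list[float]  # posting timestamps, in seconds

def inhuman_timing(times: list[float], min_gap: float = 2.0) -> bool:
    """Flag machine-fast or metronome-regular posting."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    if len(gaps) < 5:
        return False  # too few posts to judge
    too_fast = min(gaps) < min_gap   # posts only seconds apart
    too_regular = pstdev(gaps) < 0.5  # near-constant cadence
    return too_fast or too_regular

def flag_network(accounts: list[Account]) -> list[Account]:
    """Flag accounts that share a profile photo or post inhumanly."""
    photo_counts = Counter(a.photo_hash for a in accounts)
    return [a for a in accounts
            if photo_counts[a.photo_hash] > 1
            or inhuman_timing(a.post_times)]
```

An account that posts every 60 seconds on the dot, or that shares its photo with a dozen others, trips checks like these; the human-paced, individually dressed accounts Nimmo describes would not.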

Twitter’s head of site integrity, Yoel Roth, told ABC News in a statement, “Since 2016, we’ve been learning and refining our approach to new threats and challenges. We’ve expanded our policies, built our internal tooling, and tightened our enforcement against coordinated platform manipulation, including bot networks – regardless of origin.”

“Our goal is to try and stay one step ahead in the face of new challenges going forward,” he said.

Adam Meyers, vice president of intelligence at the cybersecurity firm CrowdStrike, said trolls and bots are continuously evolving, but added that it’s difficult to pinpoint specific behavioral changes since they’ve long been seen at different “levels” of sophistication – from bespoke personas with rich fake histories to farm-bought, barely backstopped accounts. While the farm-bought accounts might be easier to identify and kill, the well-constructed personas will always be harder to catch.

Darren Linvill, a Clemson University professor who with his colleague Patrick Warren analyzed some three million tweets linked to the alleged Russian operation, previously told ABC News it’s imperative that researchers be able to analyze the tactics of trolls and bot-makers as soon as possible, because they change so quickly.

“The way they [Russian trolls] operated in 2016, 2017, looks nothing like they’re doing now,” he said.

Copyright © 2018, ABC Radio. All rights reserved.