Disinformation on social media is the new arms race – meet the start-up using AI to fight it

As the UK gears up for a potential election, there are concerns over the role disinformation will play in democracy 
Amelia Heathman, 17 October 2019

By now, it’s clear that bad actors were doing all they could to spread disinformation and fake news in order to affect democratic proceedings during the 2016 Brexit referendum and US presidential campaign.

Just this week, the US Senate released a second report confirming that the Russian Internet Research Agency sought to influence the 2016 election in favour of Trump “at the direction of the Kremlin”. The report states that the agency used Facebook pages, Instagram accounts and Twitter trends to target specific groups and divide the country.

This isn't an issue only affecting the US; just this week, the Oxford Technology and Elections Commission published a report saying that Britain needs to take immediate action to reduce the risk of malicious actors online ahead of the looming general election.

London start-up Astroscreen wants to help governments and companies deal with the issue of disinformation on social media. Founded by Ali Tehrani, the company uses artificial intelligence to seek out fake accounts online that are spreading disinformation, before deploying analysts to get to the root of the problem. It counts UK government agencies and brands amongst its clients, all seeking to bring truth to the online discourse.

Here’s how it works.

Astroturfing and disinformation online

The start-up derives its name from “astroturfing” – a political term for manufactured support for a movement, essentially the opposite of genuine grassroots campaigning. Coined by a Texan senator in 1985, the tactic has been used by tobacco companies posing as citizens campaigning against plain cigarette packaging, and by gas companies spoofing Al Gore’s ‘An Inconvenient Truth’.

Astroscreen aims to dig out these fake political movements and expose their true motives. “It’s happened for a long time, but social networks have allowed it to happen at scale,” explains Tehrani.

Tehrani has been interested in the way social networks work for a while. In 2013, when he was studying economics at UCL, he experimented with building a recommendation engine based on Facebook likes that would suggest music, books and more depending on a user’s preferences – a sort of non-malicious Cambridge Analytica. A second venture in 2015 was a news analytics and monitoring company, classifying the interests of journalists and tracking how news stories were shared on social media. He sold that company a few months before Trump was elected, but by then it was clear to Tehrani that if something was polarising or controversial, it was being amplified on social networks.

Why is this? “Social networks are critical communications infrastructure but they weren’t built for that purpose,” he explains. “They have become how people communicate and consume news, follow politicians, and they are really easy to manipulate.”

Part of this comes down to the fact the platforms are focused on engagement. “If you’re trying to optimise for engagement, what piece of content is the most engaging? Controversial, biased, polarising.”

How can Astroscreen tackle disinformation online?

The start-up uses a series of AI models to monitor topics online and detect suspicious activity – take Brexit, for instance. “Say I wanted to create 1,000 accounts to spam a topic like Brexit. It’s difficult to come up with 1,000 unique usernames on Twitter, so I might create an algorithm to generate usernames using a pattern,” says Tehrani. “We have a model that detects username similarities; that is highly suspicious.”
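Tehrani doesn’t describe Astroscreen’s actual model, but the heuristic he sketches – spotting batches of usernames stamped from a common template – can be illustrated in a few lines of Python. This is a toy normalisation approach; the usernames and the cluster threshold are invented for illustration:

```python
import re
from collections import defaultdict

def flag_similar_usernames(usernames, min_cluster=5):
    """Group usernames whose only difference is a run of digits,
    and flag clusters large enough to suggest algorithmic creation."""
    clusters = defaultdict(list)
    for name in usernames:
        # Normalise: lowercase and collapse digit runs into a placeholder,
        # so "freedom_fan1024" and "freedom_fan9" share one template.
        template = re.sub(r"\d+", "#", name.lower())
        clusters[template].append(name)
    return {t: names for t, names in clusters.items() if len(names) >= min_cluster}

# Hypothetical sample: five accounts from one template, two organic ones.
sample = ["freedom_fan1024", "freedom_fan2048", "freedom_fan31",
          "freedom_fan77", "freedom_fan9", "alice_writes", "bob_on_brexit"]
suspicious = flag_similar_usernames(sample)  # only "freedom_fan#" clusters
```

A production system would use fuzzier string-similarity measures, but the principle is the same: accounts created by one script tend to leave a detectable naming pattern.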

Other indicators of fake accounts include account correlation – 30 accounts tweeting at the same time, for instance – and linguistic fingerprinting. “We can tell if one person is behind multiple Twitter accounts based on the way they write, not just what they’re saying.”
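The account-correlation signal can also be sketched simply: flag pairs of accounts that repeatedly post in the same time slots. Again, this is a minimal stand-in for whatever Astroscreen actually runs, with invented account names and timestamps:

```python
from itertools import combinations

def correlated_accounts(posts, min_shared=3):
    """Find account pairs that repeatedly post in the same minute-level
    time slot. `posts` maps account name -> list of 'HH:MM' timestamps."""
    slots = {acct: set(times) for acct, times in posts.items()}
    pairs = []
    for a, b in combinations(sorted(slots), 2):
        shared = slots[a] & slots[b]
        if len(shared) >= min_shared:
            pairs.append((a, b, len(shared)))
    return pairs

# Hypothetical timelines: bot_a and bot_b fire in lockstep; human_c does not.
posts = {
    "bot_a":   ["09:00", "09:05", "09:10", "09:15"],
    "bot_b":   ["09:00", "09:05", "09:10", "12:30"],
    "human_c": ["08:47", "13:02"],
}
coordinated = correlated_accounts(posts)
```

Linguistic fingerprinting would add another layer on top – comparing writing-style features such as character n-gram frequencies across accounts – but timing overlap alone already separates the lockstep pair from the organic account here.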

Then, the company will deploy an analyst who investigates the topic and finds the context: who is behind the campaign, what messages they are trying to spread, and what the motive is.

The idea is that governments and companies use Astroscreen’s services to spot these campaigns early, then mitigate their effects.

In September, Astroscreen uncovered a network of fake accounts tweeting in support of the Hong Kong police and against the protestors as part of a Chinese disinformation campaign. The report identified at least 2,690 fake accounts, each with between zero and five followers, using specific hashtags including #guardinghongkong and #nationalsupportforthepolice. Tehrani believes these accounts were created to promote the Hong Kong police and discredit the protestors.
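The criteria the report describes – near-zero follower counts combined with use of the target hashtags – amount to a simple filter. A minimal sketch, with invented account data, assuming each account record carries a follower count and its hashtags:

```python
def flag_low_follower_amplifiers(accounts, hashtags, max_followers=5):
    """Flag accounts that push any of the target hashtags while having
    almost no followers -- the pattern described in the report."""
    targets = {h.lower() for h in hashtags}
    return [a["name"] for a in accounts
            if a["followers"] <= max_followers
            and targets & {h.lower() for h in a["hashtags"]}]

# Hypothetical accounts illustrating the zero-to-five-follower pattern.
accounts = [
    {"name": "acct_001", "followers": 0, "hashtags": ["#guardinghongkong"]},
    {"name": "acct_002", "followers": 3, "hashtags": ["#nationalsupportforthepolice"]},
    {"name": "regular_user", "followers": 842, "hashtags": ["#guardinghongkong"]},
]
flagged = flag_low_follower_amplifiers(
    accounts, ["#guardinghongkong", "#nationalsupportforthepolice"])
```

On its own such a filter would sweep up genuine new accounts too, which is why the analysts’ contextual investigation matters.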


But it’s not just political campaigns that are at risk of manipulation. The #BoycottNike hashtag that spread last year following the company’s new ad featuring NFL star and Trump protestor Colin Kaepernick was also infiltrated by alt-right bots. “We came across bots pushing the hashtag. They didn’t start it but they were amplifying it.”

“We say to our clients, ‘You never know’. One of your executives could say something that’s sexist or negative, and of course, there’s going to be outrage. But once that hashtag starts trending, it’s no longer an inauthentic thing. Inauthentic outrage can become authentic outrage."

Tehrani says what sets Astroscreen apart from the tech platforms, which simply remove the bot accounts, comes down to its analysts. “They basically hire a bunch of engineers to build algorithms to detect fake accounts at scale. But that only gets the low-hanging fruit. To detect campaigns for every brand, every topic, every election, you need analysts. And that’s just not scalable for Twitter or Facebook.”

To future elections

It’s not easy to beat these bad actors at their own game. Tehrani describes it as a “cat and mouse game”: finding topics and detecting the patterns that point to disinformation and manipulation. “We’re currently working with a few government agencies for the likely election. Hopefully there’s nothing to find, but if there is, we’ll find it.”

The start-up recently joined CyLon, one of the world’s best cyber security accelerator programmes, based in Hammersmith, as well as TechNation’s Applied AI growth programme, part of the UK government’s AI and Data Sector deal to support AI companies as they scale. It has also raised $1 million (£795,850) in seed funding, including from UCL’s Technology Fund, to tackle these issues.

“It really is an arms race. Maybe the Russians were first but now every country has a disinformation strategy and every troll is going to try to imitate the campaigns,” he says. “It’s going to get worse over time but they’ll mess up somewhere and then we’ll catch them.”