Astroscreen raises $1m to detect social manipulation with machine learning

UK-based start-up Astroscreen has secured $1m in initial funding to advance its pioneering technology for identifying the carriers of disinformation on social media. Its techniques include coordinated-activity detection, linguistic fingerprinting, and fake-account and botnet detection.

The funding round was led by Speedinvest, alongside the UCL Technology Fund (managed by AlbionVC in collaboration with UCLB), Luminous Ventures, AISeed and the London Co-investment Fund.

Social networks are now critical to how the public consumes and shares news. But they were built to reward virality, which makes them easy to manipulate and therefore to weaponise on a global scale for commercial and political gain. Electoral interference is perhaps the best-known example: foreign intelligence agencies are accused of using fake accounts and bots to meddle in the political process and erode trust in democracy. However, commercial brands are just as likely to be targeted, and can suffer powerful adverse effects. This is the focus of Astroscreen.

At the heart of disinformation attacks lie fake social media accounts – bots (automated) and ‘sock-puppets’ (human-run). Networks of bots and sock-puppets can be used in a highly organised way to spread and amplify minor controversies or fabricated and misleading content. Once an attack gains steam, it is reproduced by genuine users, then influencers, and finally bona fide news organisations. As well as being used in politics, these networks are increasingly deployed in commercial attacks, and have already targeted global brands ranging from Nike and Starbucks to pharmaceutical giants.
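The coordinated amplification described above hints at why such networks are detectable at all: accounts acting in concert leave timing signatures. A minimal sketch of coordinated-activity detection (a hypothetical illustration, not Astroscreen's actual system) might flag pairs of accounts that repeatedly publish identical text within a short time window:

```python
# Hypothetical sketch of coordinated-activity detection: group posts by
# identical text, then flag account pairs that post the same message
# within a short time window more than once.
from collections import defaultdict
from itertools import combinations

def coordinated_pairs(posts, window=60, min_hits=2):
    """posts: iterable of (account, text, unix_timestamp) tuples.
    Returns account pairs that posted identical text within `window`
    seconds of each other at least `min_hits` times."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))
    hits = defaultdict(int)
    for entries in by_text.values():
        for (a1, t1), (a2, t2) in combinations(entries, 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                hits[tuple(sorted((a1, a2)))] += 1
    return {pair for pair, n in hits.items() if n >= min_hits}

# Toy data: two accounts echo each other within seconds, twice;
# a genuine user shares the same text hours later.
posts = [
    ("bot_a", "Brand X is a scam!", 100),
    ("bot_b", "Brand X is a scam!", 130),
    ("bot_a", "Boycott Brand X now", 500),
    ("bot_b", "Boycott Brand X now", 510),
    ("user_c", "Brand X is a scam!", 9000),
]
print(coordinated_pairs(posts))   # {('bot_a', 'bot_b')}
```

Real systems would compare near-duplicate text and richer behavioural features, but the timing-correlation idea is the same.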

Astroscreen CEO Ali Tehrani previously founded a machine-learning news analytics company which he sold in 2015, before fake news gained widespread attention.

Tehrani said: "While I was building my previous start-up, I saw at first hand how biased, polarising news articles were shared and artificially amplified by vast numbers of fake accounts. This gave the stories high levels of exposure and authenticity they wouldn't have had on their own.

"The use of such disinformation to discredit brands has the potential for very costly and damaging disruption when up to 60% of a company's market value can lie in its brands."

CTO Juan Echeverria, whose PhD at UCL was on fake-account detection on social networks, made headlines in January 2017 with the discovery of a massive botnet spanning some 350,000 separate accounts on Twitter.

Echeverria said: "Social media platforms are saturated with fake accounts and botnets and are losing this cat-and-mouse game because botnet makers are continuously finding new ways of avoiding detection. As they incorporate conversational AI and deepfakes, these botnets will get more sophisticated by the day."

Ali Tehrani concluded: "Social media platforms themselves cannot solve this problem because they're looking for scalable solutions that maintain their software margins. If they devoted sufficient resources, their profits would look more like a newspaper publisher's than a tech company's. So they're focused on detecting collective anomalies – accounts and behaviour that deviate from the norm for their user base as a whole. But this approach is only good at detecting spam accounts and highly automated behaviour, not the sophisticated techniques of disinformation campaigns.

"Astroscreen takes a wholly different approach, combining machine learning and human intelligence to detect contextual (rather than collective) anomalies – behaviour that deviates from the norm for a specific topic. Taking Brexit as an example, the inauthentic Twitter accounts that contributed to the conversation were only inauthentic in the context of Brexit, and went undetected by Twitter's scalable spam detectors. Our technology monitors social networks for signs of disinformation attacks, informing brands if they're under attack at the earliest stages and giving them enough time to mitigate the negative effects."
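The collective-versus-contextual distinction Tehrani draws can be illustrated with toy data (a hypothetical sketch using a robust z-score, not Astroscreen's actual method): an account's posting rate may look ordinary against the whole platform yet be extreme within one topic's conversation.

```python
# Hypothetical sketch: collective vs contextual anomaly detection.
# The same outlier test is run twice - once against the platform-wide
# norm, once against the norm for accounts active in a single topic.
from statistics import median

def robust_z(values):
    """Modified z-scores (median/MAD), robust to the outliers themselves."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    return [0.6745 * (v - med) / mad if mad else 0.0 for v in values]

def flag(rates, threshold=3.5):
    """Indices whose modified z-score exceeds the threshold."""
    return {i for i, z in enumerate(robust_z(rates)) if z > threshold}

# Toy data: posts-per-hour for 12 accounts across the whole platform.
platform = [1.0, 2.0, 1.5, 2.5, 3.0, 3.5, 2.0, 1.5, 0.8, 1.2, 4.0, 4.5]
# Accounts active in one topic's conversation (e.g. a single hashtag).
topic_members = [0, 2, 8, 9, 10, 11]
topic = [platform[i] for i in topic_members]

collective = flag(platform)                           # platform-wide norm
contextual = {topic_members[i] for i in flag(topic)}  # topic-level norm

print(collective)   # set() - accounts 10 and 11 look ordinary platform-wide
print(contextual)   # {10, 11} - but extreme within the topic
```

The collective pass finds nothing because the flagged accounts' rates sit inside the platform-wide spread; only the contextual pass, which compares them to the quieter accounts discussing the same topic, surfaces them.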