Bots, trolls and fake news

In recent months, following the furore over the activities of ‘Russian bots’ and their impact on the US presidential election, the Brexit referendum and indeed elections in a range of other western nations, Twitter has been taking a proactive approach to the problem, and has suspended more than 70 million fake accounts.

The move followed accusations that social media giants such as Twitter, Facebook, Google and Microsoft have all been doing too little to halt the campaigns of disinformation and the spread of fake news on the internet. All have come under scrutiny from regulators worldwide, and all have been involved in discussions on the responsibilities of social media companies in preventing trolls and hackers from using those platforms as tools for their particular political or social agendas.

In the UK, for example, the House of Commons Digital, Culture, Media and Sport Committee published its Disinformation and ‘fake news’: Interim Report (Fifth Report of Session 2017–19) in July 2018. This identified ‘fake news’ as a very real threat to democracy, describing it as content “created for profit or other gain, disseminated through state-sponsored programmes, or spread through the deliberate distortion of facts, by groups with a particular agenda, including the desire to affect political elections”. The Committee concluded that the world faces “a crisis concerning the use of data, the manipulation of our data, and the targeting of pernicious views”.

Germany has started to address the issue, passing an innovative law in 2017 on the regulation of social media sites with more than two million members. The Netzwerkdurchsetzungsgesetz (NetzDG) demands that companies such as Facebook and Twitter remove posts containing hate speech, fake news or illegal material within 24 hours; failure to do so can result in fines of up to €50 million. Predictably, the legislation has come in for criticism over the issues of free speech and internet censorship, particularly as the social media companies are alleged to be avoiding fines by being somewhat over-zealous in their blocking of controversial content.

Many other countries are also considering new legislation, or have passed laws making the dissemination of false information a criminal offence. In Malaysia, offenders can face a steep fine (up to £88,000) or six years in jail for spreading fake news on social media or mainstream news outlets.

Russia - the country mainly associated with the controversy surrounding the dissemination of propaganda, fake news and interference in the national elections of a range of countries - is also planning its own legislation to address the issue. Under its proposed law, social media networks with more than 100,000 users daily would be accountable for “inaccurate” posts made by users and would have just 24 hours to remove the offending material after being notified of it. Fines could total up to 50 million rubles ($800,000).

In the USA, the debate has largely been overtaken by President Trump’s vociferous campaign against the media - or at least those mainstream and social media sites where he is portrayed in a negative light. Nevertheless, the social media networks have come under increasing scrutiny in Congress, and legislation on the dissemination of fake news and propaganda has been proposed. However, the situation is complicated somewhat by the First Amendment of the United States Constitution, which protects freedom of speech and of the press.

Over in the Middle East, a propaganda war is currently taking place between Qatar and neighbouring states, with fake accounts being deployed to influence viewpoints and political discourse. These accounts are being used to exploit hashtags over a short time period, thus enabling them to be picked up by Twitter’s algorithms.

Recent research undertaken by the BBC found that many of these troll accounts were focused on increasing the number of their followers, specifically “to project credibility and lure genuine users to follow them”. The Saudi-based account @m6mp3, for example, which claims to expose Qatar's “support for terrorism and corruption”, has more than 41,000 followers; however, when 1,000 of these were analysed, it was discovered that 350 of them had never completed a profile or posted a single tweet.

An assessment of other accounts - including pro-Qatar ones - showed similarly high levels of inactivity among followers. In other words, disinformation is being spread by both sides as part of a well-organised programme to disseminate state propaganda and silence critical voices in the region.

Other researchers have come to the same conclusion. Marc Owen Jones, a Research Fellow at the University of Exeter, claims that half of active Twitter accounts in Saudi Arabia could actually be bots: this includes large networks tweeting up to 100,000 times per day, and creating “propaganda messages that can distort the reality of the discussions going on in the world”.

Countries in the Middle East have also passed legislation making the dissemination of fake news a criminal offence. In Saudi Arabia, for example, people who spread disinformation that negatively affects public order can be sentenced to up to five years in prison and fined up to SR3 million. In Egypt, the parliament passed a law in July 2018 giving state authorities the power to block social media accounts and also to penalise journalists deemed to be publishing fake news.

Of course, such laws are not applicable to state-sponsored trolls in Russia or the Middle East, or in any of the other countries believed to be engaging in similar campaigns.