Monday 12 November 2018

The Weaponization of Social Media

 



The use of ‘bots’ presents modern society with a significant dilemma: the technologies and social media platforms (such as Twitter and Facebook) that once promised to enhance democracy are now increasingly being used to undermine it. The writers Peter W. Singer and Emerson Brooking believe ‘the rise of social media and the Internet has become a modern-day battlefield where information itself is weaponised’. To them, ‘the online world is now just as indispensable to governments, militaries, activists, and spies as it is to advertisers and shoppers’. They argue this is a new form of warfare, which they call ‘LikeWar’. The terrain of LikeWar is social media: ‘its platforms are not designed to reward morality or veracity but virality.’ The ‘system rewards clicks, interactions, engagement and immersion time…figure out how to make something go viral, and you can overwhelm even the truth itself.’
In its simplest form, the word ‘bot’ is short for ‘robot’; beyond that, there is significant complexity, and there are different types of bots. There are ‘chatbots’ such as Siri and Amazon’s Alexa, which recognise human speech and help us with daily tasks and requests for information. There are search-engine-style ‘web bots’ and ‘spambots’. There are ‘sockpuppets’ or ‘trolls’: fake identities used to interact with ordinary users on social networks. There are ‘social bots’, which can assume a fabricated identity and spread malicious links or advertisements. And there are ‘hybrid bots’, often referred to as ‘cyborgs’, which combine automation with human input. Some bots are harmless; some are malicious; some can be both.
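The difference between a fully automated ‘social bot’ and a human-assisted ‘cyborg’ can be made concrete with a short illustrative sketch. The Python below is a toy under stated assumptions, not any real operation’s code: the post_to_platform() helper is a hypothetical stand-in for whatever posting API a platform exposes. The point is simply how little code full automation requires, and where the human operator slots into a hybrid account.

```python
import random
import time

# Canned talking points an operator wants amplified (placeholder examples).
MESSAGES = [
    "Candidate X stands with ordinary people! #ElectionDay",
    "Don't trust the mainstream coverage of Candidate X. #ElectionDay",
]

def post_to_platform(text):
    """Hypothetical placeholder for a real platform's posting API."""
    print(f"[posted] {text}")

def run_social_bot(posts_per_hour=6):
    """Fully automated 'social bot': posts canned text on a jittered schedule."""
    while True:
        post_to_platform(random.choice(MESSAGES))
        # Randomised delay so the posting rhythm looks less mechanical.
        time.sleep(3600 / posts_per_hour * random.uniform(0.5, 1.5))

def run_cyborg_bot():
    """'Hybrid' or 'cyborg' account: automation drafts, a human approves."""
    for draft in MESSAGES:
        if input(f"Post '{draft}'? [y/n] ").strip().lower() == "y":
            post_to_platform(draft)

if __name__ == "__main__":
    run_cyborg_bot()
```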
The country that is perhaps most advanced in this new form of warfare and political influence is Russia. According to Peter Singer and Emerson Brooking, ‘Russian bots more than simply meddled in the 2016 U.S. presidential election…they used a mix of old-school information operations and new digital marketing techniques to spark real-world protests, steer multiple U.S. news cycles, and influence voters in one of the closest elections in modern history. Using solely online means, they infiltrated U.S. political communities so completely that flesh-and-blood American voters soon began to repeat scripts written in St. Petersburg and still think them their own’. Internationally, these ‘Russian information offensives have stirred anti-NATO sentiments in Germany by inventing atrocities out of thin air; laid the pretext for potential invasions of Estonia, Latvia, and Lithuania by fuelling the political antipathy of ethnic Russian minorities; and done the same for the very real invasion of Ukraine. And these are just the operations we know about.’
We witnessed similar influence operations here during the Brexit referendum in 2016. A study by the Financial Times found that during the referendum campaign ‘the 20 most prolific accounts … displayed indications of high levels of automation’. Tell MAMA, which monitors anti-Muslim hate, recorded in its latest annual report that manually operated bots based in St Petersburg were active in spreading anti-Muslim hate online. Israel has also used manually operated ‘bots’ to promote a more positive image of itself online.
The Oxford Internet Institute (OII) has studied online political discussion relating to several countries on social media platforms such as Twitter and Facebook. It found that in every election, political crisis and national-security-related discussion it examined, social media opinion had been manipulated by what it calls ‘computational propaganda’. For the OII, while it remains difficult to quantify the impact bots have, ‘computational propaganda’ is now one of the most ‘powerful tools against democracy’.
Donald Trump perhaps understands the power of social media better than any other US president to date. The OII found, for example, that although he alienated Latino voters on the campaign trail, fake Latino Twitter bots were tweeting support for him. Emerson Brooking informed me that social media bots can be highly effective; for him, ‘If a bot-driven conversation successfully enters the “Trending” charts of a service like Twitter, it can break into mainstream discussion and receive a great deal of attention from real flesh-and-blood users’. He continues, ‘The first unequivocal use of political bots was in the 2010 Special Senate Election in Massachusetts, which ended in the election of Senator Scott Brown. The bots helped draw journalist (and donor) interest from across the country. The Islamic State was also a very effective user of botnets to spread its propaganda over Arabic-speaking Twitter. In 2014, it repeatedly drove hashtags related to its latest execution or battlefield victory (e.g. #AllEyesOnISIS) to international attention.’
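There is no public recipe for how such botnets were run, but the trace they leave behind, many accounts pushing near-identical text in a short burst, can be illustrated with a simple heuristic. The sketch below is a toy of my own construction, not any platform’s actual trending or detection algorithm: it scores a hashtag as suspiciously amplified when most of its volume comes from repeated text, a handful of accounts, and a narrow time window.

```python
from collections import Counter

def amplification_score(posts):
    """
    posts: list of (account_id, timestamp_seconds, text) tuples for one hashtag.
    Returns a crude 0..1 score: high when a few repeated messages from a few
    accounts generate most of the volume inside a short burst.
    """
    if not posts:
        return 0.0
    texts = [text.lower().strip() for _, _, text in posts]
    accounts = [account for account, _, _ in posts]
    timestamps = sorted(ts for _, ts, _ in posts)

    # Share of volume owed to the single most repeated message.
    top_text_share = Counter(texts).most_common(1)[0][1] / len(posts)
    # Share of volume coming from the ten busiest accounts.
    top_account_share = sum(c for _, c in Counter(accounts).most_common(10)) / len(posts)
    # Burstiness: largest fraction of posts landing inside any one hour.
    window = 3600
    burst = max(
        sum(1 for ts in timestamps if start <= ts < start + window)
        for start in timestamps
    ) / len(posts)

    return (top_text_share + top_account_share + burst) / 3

# Example: three accounts pushing the same slogan within minutes scores near 1.0.
sample = [(f"acct{i % 3}", 1000 + i * 30, "#AllEyesOnISIS join us") for i in range(30)]
print(round(amplification_score(sample), 2))
```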
So, what can be done to better regulate bots? The OII has called for social media platforms to act against bots and has suggested some steps. These include making the posts selected for news feeds more ‘random’, so users do not see only like-minded opinions; giving news feeds a trustworthiness score; and auditing the algorithms used to decide which posts to promote. However, the OII also cautions against regulating the platforms so heavily that political conversation is suppressed altogether. Marc Owen Jones of Exeter University, who has researched bots, feels that in the case of Twitter better ‘verification procedures could tackle the bots’. According to Emerson Brooking, ‘a simple non-invasive proposal bouncing around Congress now would mandate the labelling of bot accounts. This would allow bots’ positive automation functions to continue while keeping them from fooling everyday media users.’
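The first of the OII’s suggestions, making news-feed selection more ‘random’ so users are not shown only like-minded opinion, can be sketched very simply. The code below is an illustrative assumption about how a ranked feed might be diversified, not any platform’s real algorithm: it fills a fixed share of feed slots with posts drawn at random from outside the user’s usual sources.

```python
import random

def diversified_feed(ranked_posts, out_of_network_posts, feed_size=20, random_share=0.25):
    """
    ranked_posts: posts an engagement-optimising ranker would normally show, best first.
    out_of_network_posts: candidate posts from sources the user does not follow
                          or usually agree with.
    Returns a feed in which roughly `random_share` of slots come from outside
    the user's usual bubble.
    """
    n_random = int(feed_size * random_share)
    n_ranked = feed_size - n_random
    feed = ranked_posts[:n_ranked]
    feed += random.sample(out_of_network_posts, min(n_random, len(out_of_network_posts)))
    random.shuffle(feed)
    return feed

# Example usage with placeholder post labels.
ranked = [f"in-bubble-{i}" for i in range(50)]
outside = [f"out-of-bubble-{i}" for i in range(50)]
print(diversified_feed(ranked, outside))
```

The trade-off the OII warns about is visible even in this toy: set random_share too high and the feed stops reflecting what users actually want to discuss, which is the over-regulation risk in miniature.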

https://www.counterpunch.org/2018/11/09/the-weaponization-of-social-media/
