Social Media in Moderation! Moderating Social Interaction?

in #writing · 5 years ago (edited)

[Image: moderation.jpg]


Social Media in Moderation

This article is a short follow-up to my first article, "Fear and Liking on Facebook". That piece aimed to illustrate some of the issues surrounding our use of social media and why we need to be better informed about its potential drawbacks. It also looked at how the developers of these platforms use algorithmic processes to generate engagement and target advertising as part of their core business model.

Much has been made recently of how these algorithms can feed and amplify the many negative views propagated and circulated on social media, in some cases driving extreme viewpoints into the mainstream. In response, the major social media companies have had to react, whether to combat fake news, hate speech, general misinformation or other undesirable content on their platforms. This response has been prompted partly by a public backlash against their long-held position that they are not responsible for what is posted because they are "not publishers" of the content, and partly by increasing threats of regulation from governments. They have stopped short of taking ownership of the content they publish (although their terms grant them broad rights over anything you post); however, they are showing a willingness to combat the negativity that so often makes the headlines.

Facebook, YouTube and Twitter have all announced in recent times that they are sharply increasing the resources, both technological and human, devoted to the problems that can arise through the use of their services. The human element includes moderators as well as safety and security technicians. Facebook alone has doubled the number of its employees and contractors in safety and moderation roles to close to 20,000, of whom around 10,000 are content reviewers. YouTube has followed suit, employing nearly 10,000 content moderators of its own. Twitter relies more on sophisticated artificial intelligence programs to identify fake accounts and some inappropriate content, but in the main it still depends heavily on user reports of bullying and harassment, given the nature of the interaction on its platform. Google has also strengthened its evaluation teams and issued guidance on search quality to them: a document which runs to 164 pages!

This change in emphasis has been driven by senior management at these companies recognising the problems and their need to be socially responsible. Facebook now holds an executive meeting every two weeks, the "content standards forum", to discuss issues and implement policy changes or solutions. This undertaking from executive management demonstrates that the penny has dropped and that they are beginning to take matters seriously. YouTube is implementing similar meetings, led by Susan Wojcicki (CEO of YouTube), who chairs a session with her senior managers every Friday.

So, with the recognition that these problems can no longer be treated as peripheral issues and that there is a growing need to remove the toxicity from their platforms, the social media companies are taking practical steps.


[Image: gtf.jpg]

When moderation and censorship collide

With the new executive intervention and the increase in moderators, content evaluators, and safety and security technicians, direct, measurable effects are being seen. In the first quarter of this year Facebook removed or took action against 7.8 million pieces of content that included graphic violence, hate speech or terrorist propaganda, in addition to action taken on nearly 30 million other posts, videos, images and comments. How, then, do platforms increase interventions and moderate effectively without becoming a de facto "Ministry of Truth"? The line between moderation and censorship is a fine one, and often blurred. For example, it is Facebook's policy not to remove fake news just because it is false. Instead, Facebook down-ranks identified fake news and flags it as such for its users: fewer people see it, and fact-checked, verified information is posted alongside it. In this way the idea of free speech is protected while providing a check against the harm the fake news may cause.
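To make the down-ranking idea concrete, here is a minimal sketch in Python of how a feed might demote flagged items rather than delete them. The names, weights and scoring below are invented for illustration and are not Facebook's actual system:

```python
from dataclasses import dataclass

# Hypothetical demotion factor -- real platforms tune such weights internally.
FAKE_NEWS_DEMOTION = 0.2

@dataclass
class Post:
    text: str
    engagement_score: float          # base ranking signal (likes, shares, comments)
    flagged_as_false: bool = False
    fact_check_url: str | None = None

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by engagement, demoting (not removing) flagged fake news."""
    def score(post: Post) -> float:
        if post.flagged_as_false:
            return post.engagement_score * FAKE_NEWS_DEMOTION
        return post.engagement_score
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("Miracle cure found!", 95.0, flagged_as_false=True,
         fact_check_url="https://example.org/fact-check"),
    Post("Local council meeting tonight", 40.0),
]

for post in rank_feed(feed):
    print(post.text)
    if post.fact_check_url:
        # The fact-checked article travels alongside the post, not instead of it.
        print("  See fact check:", post.fact_check_url)
```

The key design choice is that the flagged post survives: it is merely starved of reach, while the fact check is surfaced next to it.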

Moderation teams, however, are not always good at distinguishing between fact and fiction, or at judging content in context. This is understandable: the teams work around the clock at sites around the globe, with images and text flashing at them and only a few seconds to decide whether what they are seeing violates the guidelines. Those guidelines are themselves part of the challenge facing moderators. They stem from decisions made by the executive committees, which are filtered down through the organisation as policies and instructions, then combined into manuals that often become rigid sets of rules into which the platform's terms and conditions of use are also intertwined.
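A toy example shows why rigid rule sets struggle with context. The rule below is entirely hypothetical, but it illustrates how a bare keyword match treats a genuine threat, a quotation and ordinary debate identically:

```python
# A deliberately naive, hypothetical rule set: one pattern, one verdict.
BANNED_PHRASES = ["attack them"]

def violates_guidelines(text: str) -> bool:
    """Flag any text containing a banned phrase, regardless of context."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)

samples = [
    "We should attack them at dawn.",                        # genuine threat
    "The editorial urged readers to 'attack them at dawn'.", # reporting a quote
    "Critics attack them over the new policy.",              # ordinary debate
]

for text in samples:
    print(violates_guidelines(text), "-", text)

# All three are flagged: the rule cannot tell a threat from a quotation
# or a figure of speech, which is why human judgement (exercised in a few
# seconds per item) matters so much.
```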

For moderation to be effective and still uphold the principle of freedom of speech, it needs intelligence and consideration. How does a person moderate effectively when faced with satirical comment, comedy or straightforward robust debate? This is particularly difficult if the moderators themselves hold an opinion opposed to the one being selected for moderation. And at what point does content become inappropriate? If a comment or post is contentious but not specifically targeted at a race, religion or any other specified group, should it be removed? These questions are becoming ever more important now that Facebook has 2.2 billion monthly active users and YouTube 1.9 billion. Facebook made a major step forward in April of this year (2018) when it issued its 'Community Standards' document, which clarified the rules on what people can and cannot post on its platform. It also introduced a limited appeals process, under which removed content can be reinstated after a second look by the moderation teams. This has been particularly useful where automated and human moderation has removed traditional posts and images containing 'art'. Facebook's moderation teams are at times left with the dubious task of defining what precisely constitutes 'art', as well as which forms of art are permissible and to what extent. Art is often provocative, at times explicit, and frequently does not translate across borders or religions, leaving moderators in an even more difficult position as to what to allow or remove.

Facebook and the other social media platforms are central hubs for social interaction and debate, ranging from cat videos all the way through to conspiracy theories. As all of this content becomes ever more channelled through human and A.I. moderation (as the public demands), we arrive at a situation without precedent in human history: never before have such small groups of people been able to control what billions of people can see and say.

Google was recently put under the spotlight by a Wall Street Journal investigation that uncovered emails between senior employees discussing ways in which Google's search engine could counter President Trump's travel ban by altering its search results. The discussions centred on how the search engine could prioritise pro-immigration sites, show users how to contribute to pro-immigration organisations, and even how to contact government agencies to protest against the policy. There were also proposals to "leverage" search results (that is, to skew and bias them) to counter "Islamophobic, algorithmically based results" for search terms such as 'Islam', 'Muslim' and 'Iran', and to positively leverage results for terms like 'Hispanic', 'Mexico' and 'Latina'. Google later issued a statement saying that its "policies and procedures would not have allowed for any manipulation of the search results". A strong statement, but the fact that it had to be issued at all is worrying. Even if you think the sentiment behind positive leveraging is a good one, it would still amount to censorship and unseen manipulation.
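Purely as an illustration of the mechanism (the topics, weights and results below are invented, and Google stated that no such manipulation occurred), very little code is needed to "leverage" a result set: a hidden per-topic boost applied after ordinary relevance scoring quietly reorders what the user sees:

```python
# Hypothetical per-topic boosts -- values above 1.0 push results up the page.
TOPIC_BOOSTS = {
    "pro_immigration": 1.5,
    "neutral": 1.0,
}

def leveraged_rank(results: list[dict]) -> list[dict]:
    """Re-rank results by relevance multiplied by a hidden topical boost."""
    return sorted(
        results,
        key=lambda r: r["relevance"] * TOPIC_BOOSTS.get(r["topic"], 1.0),
        reverse=True,
    )

results = [
    {"title": "Travel ban: full text of the order",
     "relevance": 0.9, "topic": "neutral"},
    {"title": "How to support pro-immigration groups",
     "relevance": 0.7, "topic": "pro_immigration"},
]

for r in leveraged_rank(results):
    print(r["title"])

# The boosted result (0.7 * 1.5 = 1.05) now outranks the more relevant one (0.9),
# and nothing on the page indicates that any reordering took place.
```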

Social media companies have always seen themselves as disruptors and proponents of free speech on the internet, and Google has been one of its biggest champions in recent times. In November 2017 Google challenged the ruling establishing a person's "right to be forgotten" before the Court of Justice of the European Union. Google's senior lawyer, Kent Walker, stated that the right to be forgotten would "effectively erase the public's right to know important information about people who represent them in society or provide them with services". Google now often cites this as one of its main principles of freedom of information and free speech. Recently, however, Google has been found to be duplicitous in its championing of free speech and information online, with the discovery that it is developing a search engine for China, pre-censored to satisfy the Chinese government's requirements. The development effort has been given the suitably spy-like name "Project Dragonfly". When even the so-called standard bearers of free speech online are developing background censorship, we should all take note.

The majority of people would probably agree that moderation on social media is needed (and in some cases required), as it is becoming an everyday activity for more and more people. But we need to be careful, as an offline society and an online community, and stay aware of the power of moderation and the effects it can have on freedoms such as freedom of speech and freedom of information.

[Image: ministry-of-truth.jpg]


Note: This article was sent by post (yup, the coach and horses stuff!) to @barge, who OCR'ed and then uploaded it. This introductory post by @themightysquid may help to clarify (a little) why it is necessarily so. Comments are most welcome, but TMS won't be able to respond immediately, as any correspondence will take place by snail-mail. Graphic interpretation of this post is by @barge. Thanks for reading!
