Internet Trolling: 2015 was “the year angry won the internet,” and 2016 could well be the year the internet fought back. Facebook needs to look to its laurels, be more aware of the need to protect its existing pages, and stop trying to be the biggest – @AceTweetNews


#AceNewsDesk – July.16: Can Facebook flush out the media trolls in the war on hate?

If 2015 was “the year angry won the internet,” 2016 could well be the year the internet fought back.

In 2015, feminist writer Clementine Ford was subjected to a surge of online abuse, dubbed a “whore” and a “bitch who should kill herself.” Trolls said she needed to die, to be “shot in the face” and gang-raped. What was her crime? She had reported a comment from an employee of Meriton Apartments calling her a “slut” on Facebook, and as a consequence he was fired.


That same year, Germany took a stand on anti-refugee Facebook hate speech, and top publications began to silence the haters by removing the comments sections beneath their articles. The BBC concluded that 2015 was “the year angry won the internet”; as online hate speech spirals out of control, 2016 could well be the year the internet fights back.

The biggest names in tech — Facebook, Google, Twitter, YouTube and Microsoft — have vowed to clean up community hate speech within 24 hours of it appearing, in accordance with a new EU code of conduct. Some say this is censorship, but there is real danger attached to the facilitation of online trolling, and recent terrorist activities have shone a spotlight on this. So, what exactly does Facebook have on its hands, and how can it begin the mammoth task of cleaning it up?

The supposed demise of the comments section

A 2014 survey conducted by the Associated Press revealed that 70 percent of online publishers valued the comments sections that follow articles online. These tools ignite conversation, allow for an exchange of ideas and deeper engagement, and drive increased traffic to media sites.

Continued abuse of this privilege, from those acting under the guise of anonymity, has been seen in the torrent of racist or xenophobic language and personal attacks. This hate speech may be directed at writers, subjects or other members of the community.


U.K. news publication The Guardian analyzed more than 70 million comments from the last decade. The analysis highlighted the positives that can be achieved through online comments: providing instant feedback, “asking questions, pointing out errors, giving new leads,” a tool that serves to “enrich the Guardian’s journalism.” However, the “dark side of comments” revealed a huge amount of abuse, with 1.4 million comments blocked. Exploring this hate speech further revealed that eight of the 10 most-abused writers were women, and the other two were black, despite these writers forming a minority of the editorial staff.

Chicago Sun-Times managing editor Craig Newman described the issue of “a morass of negativity, racism, hate speech and general trollish behaviors that detract from the content,” explaining his decision to temporarily remove the comments section from the publication.

Many others have followed suit, choosing to kill the comments in order to avoid moderating the growing mass of hate speech. Top publications that have rejected comments include Reuters, Recode, The Week, Bloomberg, The Verge, The Daily Dot, The Daily Beast and Vice’s Motherboard, to name just a few.

Reuters’ executive editor told readers that the news company was moving the discussion to social media: “Those communities offer vibrant conversation and, importantly, are self-policed by participants to keep on the fringes those who would abuse the privilege of commenting.”

Social media giants battle hate speech

Unfortunately, any dreams of a self-moderated social media community, free of online trolls, were not to be. On the contrary, these forums have become a breeding ground for racial slurs, misogynistic language and personal attacks.

Twitter sees an average of 480,000 racist tweets a month (compared to 10,000 only three years ago). “We suck at dealing with abuse,” said Twitter’s former CEO Dick Costolo. Once again, however, Facebook leads the race, with a whopping 1 million user violation reports every day. So what type of threatening behavior are we seeing, and how is this connected to the news?


My company, BrandBastion, conducted a study measuring the amount and type of social media threats in 40,000 comments, from 10 of the most-engaged news publishers on Facebook: ABC News, CBS News, Sky News, NBC News, CNN, Time, The Washington Post, The Guardian, The Wall Street Journal and USA Today.

We found that one in 14 comments contained a social media threat. Some 31 percent of these threats were identified as extremely aggressive “defamatory language, profanity and online bullying.” A further 20 percent were classified as “hate speech,” attacking a person or group based on specific attributes.

When exploring the topics that generated the largest proportion of hate speech, we found articles around the elections incited the most anger. Highly offensive attacks on Melania Trump, calling her “ugly,” “fake” and “nasty” rapidly escalated to graphically lewd comments and racial battles between commenters. Overall, the hate speech we discovered focused on nationality (33 percent), religion (31 percent), race (18 percent), sexual orientation (9 percent), gender (6 percent) and political views (4 percent).

As the elections heat up, sites like Facebook are going to have their hands full monitoring and controlling this spread of offensive commenting. With his latest pledge to quash the hatred, all eyes are on Mark Zuckerberg to manage this torrent of abusive behavior.

Community solutions to counter the offensive

Where status updates and selfies once dominated, Facebook today has become a portal for the news. According to traffic-analytics service Parse.ly, social media drives 43 percent of traffic to media sites. Facebook is unquestionably the largest source, and has overtaken Google referral traffic, which accounts for just 38 percent.

What’s more, with advanced tools such as Facebook’s Instant Articles — now officially rolled out to all publishers — article consumption is likely to stay within the Facebook domain. This all means Facebook has a huge power over how we consume the news and its connected comments, making its next steps all the more crucial.

Gigaom writer Mathew Ingram argues that moving news discussion onto sites like Facebook knowingly hands off the responsibility for moderating content to the social media platforms. But it also means digital publications pass up the “value of engagement” that comments bring.


So how does the social networking Goliath intend to remove all hate speech within 24 hours, in line with the latest EU code of conduct? The IT companies have all agreed to put notification processes in place, to review reported content against their community guidelines, and to remove or disable it within 24 hours. They have also pledged to educate users, train staff, share procedural information with the authorities and intensify cooperation between the giants of tech.
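
What might such a report-and-review pipeline look like in practice? Below is a minimal sketch in Python, assuming a simple in-memory queue; the class and method names are hypothetical and not drawn from any platform’s actual systems, but the hard 24-hour deadline mirrors the code of conduct described above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# The EU code of conduct asks platforms to act on valid reports within 24 hours.
REVIEW_DEADLINE = timedelta(hours=24)

@dataclass
class Report:
    comment_id: str
    reason: str
    reported_at: datetime
    resolved: bool = False

class NotificationQueue:
    """Hypothetical queue of user reports awaiting human review."""

    def __init__(self) -> None:
        self.reports: list[Report] = []

    def submit(self, comment_id: str, reason: str) -> None:
        # A user notifies the platform about a piece of content.
        self.reports.append(Report(comment_id, reason, datetime.utcnow()))

    def overdue(self, now: datetime) -> list[Report]:
        # Reports still open past the 24-hour review window.
        return [r for r in self.reports
                if not r.resolved and now - r.reported_at > REVIEW_DEADLINE]

    def resolve(self, comment_id: str, violates_guidelines: bool) -> None:
        # A reviewer checks the report against the community guidelines.
        for r in self.reports:
            if r.comment_id == comment_id and not r.resolved:
                r.resolved = True
                if violates_guidelines:
                    print(f"Removing or disabling content {comment_id}")
                break
```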

This year Facebook has been involved with a number of initiatives, backing a campaign against misogyny, and launching the Online Civil Courage Initiative in January to empower users to fight extremist abuse.

Some media platforms rely wholly on user moderation, self-censorship or a members-only commenting model. After feminist site Jezebel suffered an epidemic of rape GIFs filling its comments section, it brought back the “pending comment” system: only comments from approved commenters are visible immediately, while all others go into a pending queue that is shown only if readers choose to view it.
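
As a rough illustration of that pending-comment model, the Python sketch below publishes comments from approved commenters immediately and holds everything else in a queue that readers must opt in to see; the names and data structures are assumptions for illustration, not Jezebel’s actual implementation.

```python
# Hypothetical pending-comment model: approved commenters publish instantly,
# everyone else lands in a pending queue shown only on request.
approved_commenters = {"alice", "bob"}

published: list[tuple[str, str]] = []
pending: list[tuple[str, str]] = []

def submit_comment(author: str, text: str) -> None:
    """Route a new comment based on the author's approval status."""
    target = published if author in approved_commenters else pending
    target.append((author, text))

def visible_comments(show_pending: bool = False) -> list[tuple[str, str]]:
    """Approved comments are always visible; pending ones only if requested."""
    return published + pending if show_pending else list(published)

submit_comment("alice", "Great piece, thanks.")
submit_comment("mallory", "Abusive comment...")
print(visible_comments())                    # only alice's comment
print(visible_comments(show_pending=True))   # includes the pending queue
```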

Former Reddit product executive Dan McComas co-founded Imzy, a social platform set on eliminating hate speech by only allowing registered members to comment in its forums. SolidOpinion.com has another strategy, limiting commenting ability to paying members only. This approach has attracted customers such as Tribune Publishing, owner of the Chicago Tribune and the Los Angeles Times, controversially putting a price on freedom of speech. Another startup, Civil Comments, works on the basis that users rate randomly chosen comments to classify acceptable material and to power flagging of offensive content.
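
To make the Civil Comments approach more concrete, here is a hedged sketch of crowd-rated moderation: each prospective commenter rates a few randomly chosen pending comments, and a comment is only published once enough raters judge it civil. The thresholds and names are illustrative assumptions, not the product’s real parameters.

```python
import random
from collections import defaultdict

RATINGS_NEEDED = 3       # assumed number of peer ratings before a decision
CIVIL_THRESHOLD = 2 / 3  # assumed share of "civil" votes required to publish

ratings: dict[str, list[bool]] = defaultdict(list)  # comment_id -> votes
pending_queue = ["c1", "c2", "c3"]                  # comments awaiting review

def comments_to_rate(n: int = 2) -> list[str]:
    """Pick random pending comments for a user to rate before posting."""
    return random.sample(pending_queue, k=min(n, len(pending_queue)))

def rate(comment_id: str, civil: bool) -> None:
    ratings[comment_id].append(civil)

def decide(comment_id: str) -> str:
    """Publish, flag, or keep waiting, based on the peer votes so far."""
    votes = ratings[comment_id]
    if len(votes) < RATINGS_NEEDED:
        return "pending"
    return "publish" if sum(votes) / len(votes) >= CIVIL_THRESHOLD else "flag"
```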

Artificial intelligence to aid moderation

Google’s Eric Schmidt has called on the tech community to create “spell-checkers, but for hate and harassment,” in an effort to counter online terrorism. Applying this intelligence to article comments would be a natural progression. The Guardian recently reported tactics to “weed out the trolls,” concluding that moderation is necessary, through human decision-making backed by “smart tools.” However, this all requires an internationally agreed-upon definition of hate speech, and a system able to decipher context as well as links to external sites.
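
As a toy illustration of the “spell-checker for hate” idea, the sketch below checks a comment against a small blocklist and flags matches for human review rather than deleting them outright. The terms and logic are placeholders; as the paragraph above notes, a real system would also need contextual understanding and an agreed definition of hate speech.

```python
import re

# Placeholder blocklist; a production system would use far richer signals.
BLOCKLIST = {"kill yourself", "go die", "vermin"}

def flag_for_review(comment: str) -> list[str]:
    """Return any blocklisted phrases found in a comment."""
    text = comment.lower()
    return [term for term in BLOCKLIST
            if re.search(r"\b" + re.escape(term) + r"\b", text)]

hits = flag_for_review("You are vermin and should go die")
if hits:
    print(f"Flagged for human review, matched: {hits}")
```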


Facebook has already turned to AI to report offensive visual content; this technology now reports more offensive photos than human users on the network do. Last year Twitter followed this example, investing in visual intelligence startup MadBits to identify and flag harmful images.

The Huffington Post uses a machine learning algorithm called JuLiA — “Just a Linguistic Algorithm” — to sort through comments, identifying abusive language to aid moderators in providing a healthy interaction.
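
For readers curious what such a linguistic model involves, here is a minimal sketch using scikit-learn. This is not HuffPost’s actual JuLiA, just an illustration, with assumed toy training data, of how a classifier trained on labelled comments can surface likely abuse for human moderators.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: 0 = acceptable, 1 = abusive.
train_comments = [
    "Great reporting, thanks for the article",
    "I disagree with the author's conclusion",
    "You are an idiot and should be shot",
    "Go back to where you came from",
]
train_labels = [0, 0, 1, 1]

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_comments, train_labels)

new_comment = ["You should be shot for writing this"]
print(model.predict(new_comment))        # likely [1] -> route to a moderator
print(model.predict_proba(new_comment))  # confidence scores behind the call
```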

Others have turned to third-party technologies that can customize their tools based on a media site’s preferences and needs, its target audience, and its geographical location and laws. These steps offer an alternative to the censorship of removing comments entirely, instead protecting the spaces that enable people to speak more freely.
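
The customization described above might boil down to a per-site configuration; the sketch below is purely illustrative, with assumed field names and thresholds rather than any vendor’s real settings.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationConfig:
    """Assumed per-publisher settings for a third-party moderation tool."""
    site: str
    languages: list[str]
    jurisdiction: str             # which legal definitions of hate speech apply
    auto_hide_threshold: float    # model confidence above which content is hidden
    escalate_threshold: float     # confidence above which a human reviews it
    blocked_topics: list[str] = field(default_factory=list)

config = ModerationConfig(
    site="example-news.com",
    languages=["en", "de"],
    jurisdiction="EU",
    auto_hide_threshold=0.95,
    escalate_threshold=0.60,
    blocked_topics=["doxxing", "threats of violence"],
)
print(config)
```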

Will more news sites bring back the comments? This all depends on how successfully Facebook and co. rid them of hatred. As the trolls become the internet norm, the media world is pulling out the big guns to overthrow them, arming A.I. with contextual understanding and advanced intelligence, and empowering communities to fight back.

Editors Notes:

I would remind you that this blog is produced free for the public good and you are welcome to republish or re-use this article or any other material freely anywhere without requesting further permission.

News & Views are always welcome and will be published as long as they contain NO bad language and are related to the subject matter.

To keep online information secure, experts recommend keeping your social media accounts private, changing your passwords often, and never answering unsolicited emails or phone calls asking for your personal information. If you need help and guidance, visit https://acepchelp.wordpress.com and leave a comment.

Ace News Services Site Links Listed Here:

AceTweet This News


About Ace Worldwide News Group

After 30 years of providing my services in Warwickshire in the United Kingdom, I am in the process of building a network of news sites in finance, business, property, social care and healthcare under the name of "Ace News Group", together with providing goods and services through our sales and marketing news. I also run an organisation and a fully fledged management consultancy agency, which provides contracts to enable people to provide their news, goods and services.

3 responses

  1. Great article Ian! It is a problem, no doubt. But for many people comment sections are their only avenue of having their voices heard. It’s sad that a few bad apples, as in every aspect of life, ruin it for the rest of us. FB is bringing in BILLIONS so I don’t feel too badly that they will have to staff themselves with more ‘offensive language’ monitors but for smaller companies it will be a burden. Then, of course, there is the whole censorship issue that opens up a whole other bag of worms! The topic in and of itself brings out the beast in many and can lead to the very issues trying to be avoided. Boy, talk about a catch 22!! Anyway, have a great weekend! 🙂
