People Who Continually Post or Protest on Facebook Should Do Something

SAN FRANCISCO — Facebook is for the first time publishing its secret rules and guidelines for deciding what its 2.2 billion users can post on the social network, including hate speech, pornography, even cannibalism. It's also introducing an appeals process for Facebook users who believe their posts were removed in error.

Until now, Facebook had not publicly disclosed the lengthy rules given to its content reviewers to guide their decisions on whether to remove Facebook posts flagged for violating the Silicon Valley company's policies. And Facebook users whose content was removed had little recourse if they believed content reviewers made the wrong call.

Growing frustration that the policies — called community standards — are unclear and inconsistently enforced compelled Facebook to open up about how these decisions get made.

Facebook executives say they want users to understand how company policies are applied to their Facebook posts. The company is also looking for feedback from Facebook users on the policies.

"We think the new community standards are going to give people the knowledge they need to say: 'We think you applied the policy incorrectly,'" said Monika Bickert, vice president of global product management. "This is going to be a way to give people a real voice in this process."

The new appeals process will give Facebook users the option of asking a team of content reviewers to take a second look when individual pieces of content are deleted for nudity or sexual activity, hate speech or violence, she said.

How it works

If your photo, video or post is removed for violating Facebook's rules, you will be given the option to "Request Review." Appeals will be conducted by a "community operations" team within 24 hours. If Facebook determines it made a mistake removing content, it will be restored. Before the end of the year, Facebook will extend appeals to people who report content and are told it does not violate Facebook's rules.
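To make that workflow concrete, here is a minimal sketch of the appeal flow as a state machine in Python. Everything in it (the Post class, the method names, the review_due field) is hypothetical scaffolding invented for illustration; only the states and the 24-hour window come from the article, and Facebook has not published its internal implementation.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum
from typing import Optional


class Status(Enum):
    VISIBLE = "visible"
    REMOVED = "removed"
    UNDER_REVIEW = "under_review"
    RESTORED = "restored"


@dataclass
class Post:
    author: str
    content: str
    status: Status = Status.VISIBLE
    review_due: Optional[datetime] = None

    def remove(self):
        # A content reviewer takes the post down for a policy violation.
        self.status = Status.REMOVED

    def request_review(self):
        # The author clicks "Request Review"; per the article, a
        # "community operations" team responds within 24 hours.
        if self.status is Status.REMOVED:
            self.status = Status.UNDER_REVIEW
            self.review_due = datetime.now(timezone.utc) + timedelta(hours=24)

    def resolve_appeal(self, removal_was_mistake):
        # If Facebook determines the removal was a mistake, restore the post.
        if self.status is Status.UNDER_REVIEW:
            self.status = Status.RESTORED if removal_was_mistake else Status.REMOVED


# A quick walk-through of the path the article describes:
post = Post(author="alice", content="vacation photo")
post.remove()                                  # flagged and taken down
post.request_review()                          # user appeals
post.resolve_appeal(removal_was_mistake=True)  # reviewer reverses the call
assert post.status is Status.RESTORED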

The moves announced Tuesday are the latest in a series of efforts to restore public trust after 87 million people had their data taken without their consent by Cambridge Analytica, a British political firm with ties to Donald Trump's presidential campaign.

They come amid growing scrutiny of blunders Facebook has made in policing content around the globe — from riots and lynchings sparked by the spread of hate speech and misinformation in countries such as Sri Lanka and Myanmar, to inflammatory posts attacking religions and races — even after U.S. users flagged them.

For example, Facebook refused to take down a photo of a black man missing a tooth and wearing a Kentucky Fried Chicken bucket on his head with the caption: "Yeah, we needs to be spending dat money on food stamps wheres we can gets mo water melen an fried chicken."

But Ijeoma Oluo had her Facebook account suspended after she posted screenshots of racist comments and threats she received following a social media post. While on a road trip, the black activist had stopped at a Cracker Barrel, which has paid millions to settle lawsuits over racial discrimination, and posted, "At Cracker Barrel 4 the 1st time. Looking at the sea of white folk in cowboy hats & wondering 'will they let my black ass walk out of here?'" Facebook later apologized and pledged to improve its "process on these important issues." It also removed some of the accounts that attacked Oluo.

Over the past decade, Facebook has developed dozens of rules, covering everything from credible threats of harm to hate speech, to draw the line between what should and shouldn't be allowed and to make the platform a safer, less toxic place for its 2.2 billion users. The rules are interpreted differently from country to country depending on the language, culture and mores of Facebook users.

"Facebook seems to be taking a step forward by being a bit more open about its standards for filtering and moderating content," said Siva Vaidhyanathan, professor of media studies at the University of Virginia and author of Antisocial Media, an upcoming book on Facebook. "But it still calls these standards 'community standards' as if Facebook constitutes a community. ...  This whole process assumes there is — or perhaps creates — some global, universal standard of decency and appropriateness."

Facebook's growing influence over permissible speech in one of the most crowded corners of the Internet has alarmed many. Some say Facebook isn't doing enough to remove harmful content. Others say Facebook is quickly becoming the world's largest and most dangerous censor. Both sides have called for greater transparency.

The diversity of content reviewers and their bosses has also been called into question. Conservatives say there's too much influence from the left-leaning Silicon Valley, with its liberal leadership and employees. Others say not having content reviewers who reflect the racial and ethnic make-up of the communities they serve leads to mistakes in enforcement.

During Mark Zuckerberg's testimony on Capitol Hill, lawmakers accused Facebook of shutting down conservative voices such as the pro-Trump YouTube duo Diamond and Silk. Christian broadcasters have also complained that Facebook censors their voices.

Rep. G.K. Butterfield, D-N.C., challenged Zuckerberg on the company's poor track record on diversity. "This does not represent America," he said, waving a print-out of Facebook's all-white executive team.

"With the amount of content in our systems and the current systems we have in place to review, we have a small amount of mistakes, but that's too many," Zuckerberg said.

Facebook's rules are different from laws in the United States and elsewhere. For example, Facebook defines hate speech as a direct attack — dehumanizing speech, statements of inferiority or calls for exclusion or segregation — on people from protected groups. That means Facebook deletes calls for violence or slurs that may be protected free speech in the U.S. under the First Amendment.

Facebook is not under any legal requirement to remove offensive content. A 1996 federal law, Section 230 of the Communications Decency Act, shelters most technology companies from legal liability for the content users post on their services.

Facebook executives say it's a difficult balancing act to weigh what is acceptable expression and what is not.

An internal meeting is held every two weeks to review policies and update them when appropriate. Facebook also conducts weekly audits of content reviewers' work to make sure they are following the guidelines consistently, Bickert says. But with millions of posts flagged each week, critics say content reviewers frequently have to make snap decisions on complex questions.

During his testimony on Cambridge Analytica, Zuckerberg was asked by Sen. John Thune, a Republican from South Dakota, about the company's content screening policies and how it draws "the line between what is and what is not hate speech."

The Facebook CEO said his company would increasingly rely on artificial intelligence in the next five to 10 years. In October, Facebook announced plans to hire 10,000 more human content reviewers.

"We're developing AI tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook," he said. But he acknowledged there are challenges.

"Determining if something is hate speech is very linguistically nuanced," he said.

Source: https://www.usatoday.com/story/tech/news/2018/04/24/facebook-discloses-secret-guidelines-policing-content-introduces-appeals/544046002/
