Facebook Inc. has rolled out a new feature that allows users to report other users who engage in hate speech or other illegal activity, in an effort to curb the spread of extremism.
According to a blog post from the company, users can now report posts and comments that violate its guidelines.
The feature is part of the platform’s new reporting system, called Report Hate, which it announced on Monday.
The company has been criticized over the growing use of hate speech reports, with some observers calling the system a slippery slope toward censorship.
But Facebook says the reporting feature is necessary because hate speech is a serious issue, and that the company remains committed to protecting free speech on its platforms.
“The content reported will be reviewed by our human reviewers to ensure that it does not violate our Community Standards and will not violate anyone’s privacy,” the company said.
“We will remove hate speech when it’s identified and take appropriate action, including legal action against the people who spread it,” the post continued.
“We also take steps to block content that violates our Community Guidelines.”
Facebook also announced on Tuesday that it will add a new “takedown” feature that it calls “Stop Hate.”
The feature will let users flag comments or posts they believe violate its terms of service.
The feature was announced in a blog post on the company’s website.
The new feature will require a report that includes a link to the post or comment in question, which Facebook says it will review and, if necessary, remove in the same manner as other flagged content.
The company says it will not take action against a person simply for reporting a post or comment.
Users will be able to report comments, posts, photos and other content that they believe violates Facebook’s Community Guidelines, which include a commitment to “protect free speech, civility, and equality for all.”
Facebook said the new report tool will only be used for “reported content that is clearly abusive or illegal.”
The company did not say what specific content could be flagged, nor did it say when reports will be available.
Facebook has been under pressure for months over the issue of its platform’s ability to track people who post content deemed hateful or offensive.
The company says it has identified nearly 10,000 accounts in which hateful speech or content has been reported.
The platform received more than 4 million reports in the first quarter of this year, and the company said its efforts to remove content, which are reviewed by a third-party company, have increased each quarter.
The platform has also been criticized in the past for its “troll-proof” feature, which requires people to sign in to their accounts and verify their identity before posting content.
Facebook also says the feature is not intended solely to protect against online harassment, but is a way for users to help keep the platform safe.