Users will receive a warning from YouTube if their comment is offensive


Over the past year, YouTube has been flagging offensive comments, especially those that contain hate speech. Users will now receive a warning message when they post a comment that contains hate speech or that may otherwise be considered offensive.


Toxic and abusive comments have been a persistent source of frustration for YouTube, its creators, and its viewers. The company has already tried to curb them with tools such as a warning shown at the time of posting, nudging users to be more thoughtful.

The platform is now rolling out a new tool that confronts offenders about their abusive comments more directly and takes firmer action against repeat offenders.

According to YouTube, users whose abusive comments have been deleted for breaking the platform's guidelines will receive a notification. If a user keeps posting abusive comments after receiving that notice, the service will block them from commenting for 24 hours. YouTube says it tested the feature ahead of today's rollout and found that the notifications and timeouts were effective.

Languages supported

Removal notifications currently apply only to English-language comments, though YouTube plans to add more languages in the future. Notably, the pre-posting warning is available in both English and Spanish.

The company said its goals were to "protect creators from users trying to harm the community through comments, as well as provide more transparency to users who may have had comments removed due to policy infractions and perhaps help them understand our Community Guidelines."

Users can submit feedback if they believe their comment was removed in error, though the company did not say whether it would reinstate comments after reviewing that feedback.

YouTube also said in a forum post that it has been working to improve its AI-based detection systems, which removed 1.1 billion "spammy" comments in the first half of 2022. The company added that it has improved its tools for detecting and removing bots in live chats.

Automated detection has helped YouTube and other social networks remove spam and abusive content, but abusers often try to fool these systems with slang terms or deliberate misspellings. It is also harder to deter hostile comments in languages other than English.

YouTube has recently experimented with a variety of ways to reduce abusive comments on the platform, including showing a user's comment history on their profile card and hiding comments by default.

Last month, YouTube launched a tool that lets creators hide a specific user's comments. Because the control applies across the entire channel, that user's hostile comments won't appear on any of the creator's other videos either.

Platforms all across the world are struggling to stop the spread of offensive comments

When England footballers Bukayo Saka, Marcus Rashford, and Jadon Sancho were abused for missing penalty kicks at last year's Euro championship, Instagram served as a breeding ground for that abuse.

A recent analysis from GLAAD and Media Matters found that anti-LGBTQ slurs have increased sharply since Elon Musk took control of Twitter. Hostile and abusive comments remain a major problem, even though all of these platforms have introduced features to mute or hide comments and restrict who can comment.

