The rise of social media platforms has transformed the way people communicate, share information, and interact with each other. However, along with its benefits, social media has also become a breeding ground for online hate speech.
According to a 2020 study by the Anti-Defamation League, there has been a 55% increase in reported cases of online hate speech since 2015, with social media platforms being the primary medium for such incidents.
To tackle this problem, a new regulator, Coimisiun na Mean, has been established to enforce compliance with internet safety guidelines and address online hate speech.
This move comes after several incidents in which advertisements containing extremely violent content were accepted by social media platforms, despite those platforms having systems in place to detect and remove hate speech. With the new regulatory framework in place, there is hope for a safer online environment and better media control.
This article will explore the extent of the problem, the social media response, the regulatory framework and its enforcement powers, and the future of online safety.
Extent of the Problem
The prevalence of online hate speech on major social media platforms was demonstrated when advertisements containing extremely violent content were accepted by YouTube and TikTok, some of which Facebook rejected, despite all of these platforms having systems in place to detect and prevent such content. This highlights the need for a more effective approach to regulating online content, particularly in relation to hate speech.
The establishment of a new regulator, Coimisiun na Mean, with the power to impose significant financial penalties and enforce online safety codes, is a step towards addressing this issue.
Despite the efforts of social media companies to prevent hate speech, it is clear that their systems are not foolproof and that harmful content can still slip through the cracks. The new regulator will have the authority to investigate and take action against any service suspected of being non-compliant, and will be able to create and apply obligations through binding online safety codes.
These codes will require designated online services to take measures to tackle the availability of defined categories of harmful online content, including hate speech, and can regulate commercial communications such as advertising and sponsorship.
Social Media Response
Companies that provide internet services are required to comply with the regulations set by the new regulatory authority to ensure that harmful content, including hate speech, is not broadcast on their platforms. Social media giants have responded to this development by stating their commitment to combating online hate speech. Meta, the owner of Facebook, has explicitly stated that hate speech has no place on its platforms, while TikTok has implemented strict rules for ads containing hate speech or hateful behaviour.
Despite these measures, hate speech continues to slip through the cracks, underscoring the need for more stringent regulations and oversight. The new regulatory authority, Coimisiun na Mean, has been granted substantial authority to monitor and ensure compliance with internet safety guidelines. The authority can impose significant financial penalties on non-compliant companies, which may serve as a powerful deterrent to companies that have been lax in their efforts to combat online hate speech.
Regulatory Framework
Implementation of the Online Safety and Media Regulation (OSMR) Act sets the stage for the creation of a safer internet environment and better media control.
Coimisiun na Mean, the new regulator of the media industry, is given substantial authority to monitor and ensure compliance with the internet safety guidelines.
This regulatory framework aims to tackle the availability of defined categories of harmful online content, including hate speech, and can regulate commercial communications such as advertising and sponsorship.
The regulator has the power to create and apply obligations through binding online safety codes. These codes will require designated online services to take measures to combat the availability of harmful online content.
Moreover, Coimisiun na Mean can impose a significant financial penalty of up to €20 million or 10% of turnover.
The OSMR Act serves as the legal basis for the online safety commissioner to set up individual complaints mechanisms for digital platforms.
While individuals will soon be able to report websites containing potentially damaging material, the department has declined to create an individual complaints system until the regulations have had sufficient time to bed in.
Overall, the OSMR Act and the creation of Coimisiun na Mean demonstrate a commitment to creating a safer online environment and better media control.
Enforcement Powers
Coimisiun na Mean’s enforcement powers under the OSMR Act are a significant step towards creating a safer online environment. The regulator can impose substantial financial penalties and create binding online safety codes that require designated online services to take measures against harmful online content, with potentially severe consequences for non-compliant platforms.
This regulatory framework gives Coimisiun na Mean the authority to investigate and take action against any service suspected of being non-compliant with online safety guidelines, with the power to create and apply obligations through these binding codes.
The consequences for non-compliant platforms can be severe, with Coimisiun na Mean able to impose a financial penalty of up to €20 million or 10% of turnover. These enforcement powers serve as a strong deterrent for platforms that may otherwise be lax in their approach to harmful online content.
Ultimately, these measures are designed to ensure a safer online environment, with better media control and greater protection for individuals against online hate speech and other harmful content.
Future of Online Safety
The future of online safety is likely to involve ongoing efforts to combat harmful online content, with a focus on developing new strategies for identifying and addressing hate speech and other forms of harmful online behaviour.
The new regulator Coimisiun na Mean has been given substantial authority to monitor and ensure compliance with internet safety guidelines, including the power to impose significant financial penalties for non-compliance.
Additionally, the Online Safety and Media Regulation Act serves as the legal basis for the online safety commissioner to set up individual complaints mechanisms for digital platforms, giving individuals the ability to report websites containing potentially damaging material.
To further combat online hate speech, the new regulator can create and apply obligations through binding online safety codes that require designated online services to take measures to tackle the availability of defined categories of harmful online content.
These codes can also regulate commercial communications, such as advertising and sponsorship, made available on those services.
The categories of harmful online content include online content linked to 42 existing offences, including those under the Harassment, Harmful Communications and Related Offences Act 2020 and the Prohibition of Incitement to Hatred Act 1989.
While the department has declined to create an individual complaints system until the regulations have had sufficient time to bed in, it is clear that the OSMR Act is designed to ensure a safer online environment and better media control, setting the stage for ongoing efforts to combat harmful online content.