Government Leaders Call For Section 230 to Be Revised

Section 230 has been a controversial rule for decades. Will it soon be repealed? Here's the background of Section 230, explained.

By Ade Hennis

Jan. 4 2022, Published 4:40 a.m. ET


Social media platforms and online marketplaces have been criticized for not enforcing guidelines that protect users from harmful content. However, Section 230 largely shields these online entities from legal action over content posted by their users. The rule has left many confused about what Section 230 actually is and how it works. Here's the background of Section 230, explained.


Online activity is only going to increase in the future, especially with the rise of the metaverse and its technology. But if consumers are going to use these AR, VR, and MR devices and software, are they going to be protected from the online world's potential dangers, or will the companies be free from responsibility?

What is Section 230?

Part of the Communications Decency Act, Section 230 protects online platforms from being held liable for what users do on them, such as the comments or content they post.


The Communications Decency Act was put in place to keep minors from accessing pornographic material online. The act went into effect in 1996, and since then, opinions have been split on whether Section 230 has benefited audiences or just the websites.


Arguments have been made that many news websites, especially in the political and entertainment sectors, intentionally post controversial content so that their comment sections can generate a buzz. The biggest concerns about the law, however, center on social media platforms. Many politicians have called for a change to Section 230 for various reasons, including online bullying.

Online bullying has been a constant issue, and platforms such as Facebook and Twitter have been criticized for not taking the necessary measures to reduce or eliminate the problem. Furthermore, some of the content that gets posted on these platforms can be harmful to minors and other audiences. Some speculate that because these platforms earn billions in revenue from the sheer volume of content posted, they refrain from closely monitoring what gets published.


Online scams and malicious activities have been a huge problem as well. American consumers lost more than $4 billion in 2020 due to online scams, according to Business Insider. Many of those scams occurred on social media platforms, with younger audiences (despite being seen as more tech-savvy) often falling victim to these attacks.


While it would be difficult for these social networks to monitor and regulate every potential scam on their platforms, some have argued that because the platforms aren't held responsible for the fraud, they don't implement enough security measures to prevent it.


What are some exceptions to Section 230?

If an online company is found to have contributed to or promoted illegal activity, Section 230 provides no protection. A website also isn't protected by the rule if users post intellectual property that isn't their own.


So, if users were to post videos that they don’t have the rights to, the platform could be held liable for copyright infringement. This is why it's common to see copyrighted content removed from social media platforms almost instantly if it wasn't posted by the original creator.

Will Section 230 be revised?

While Joe Biden has publicly stated that he wants to revoke Section 230, many online companies have argued against that move. They've noted that they wouldn't be profitable if they had to bear the cost of monitoring every post that gets published. However, given that social networks already remove much harmful content nearly instantly, why can't they do the same for everything else?
