Social media platforms and online marketplaces have been criticized for not enforcing guidelines that protect users from harmful content. However, a law known as Section 230 shields these companies from liability for content posted by their users. This has left many people confused about what Section 230 actually is and how it works. Here's the background of Section 230, explained.
Online activity is only going to increase in the future, especially with the rise of the metaverse and its technology. But if consumers are going to use these AR, VR, and MR devices and software, are they going to be protected from the online world's potential dangers, or will the companies be free from responsibility?
What is Section 230?
The Communications Decency Act went into effect in 1996, originally intended to shield minors from accessing pornographic material online. Section 230 is a provision of that act stating that providers of an interactive computer service shall not be treated as the publisher or speaker of information posted by their users. Since then, opinions have been split on whether Section 230 has benefited audiences or just the websites themselves.
Some argue that many news websites, especially in the political and entertainment sectors, intentionally post controversial content so that their comment sections generate a buzz. But the biggest concern involves social media platforms. Many politicians have called for changes to Section 230 for various reasons, including online bullying.
Online bullying has been a constant issue, and platforms such as Facebook and Twitter have been criticized for not taking the necessary measures to reduce or eliminate the problem. Furthermore, some of the content posted on these platforms can be harmful to minors and other audiences. Some speculate that because these platforms earn billions in revenue from user-generated content, they have little incentive to closely monitor what gets published.
Online scams and malicious activities have been a huge problem as well. American consumers lost more than $4 billion in 2020 due to online scams, according to Business Insider. Many of those scams occurred on social media platforms, with younger audiences (despite being seen as more tech-savvy) often falling victim to these attacks.
While it would be difficult for these social networks to monitor and regulate every potential scam on their platforms, some argue that because the platforms aren't held responsible for the scams, they don't invest in enough security measures to prevent them.
What are some exceptions to Section 230?
Section 230 provides no protection if an online company is found to have contributed to or promoted illegal activity. The law also does not cover intellectual property claims, so a website is not shielded when users post intellectual property that isn't their own.
So, if users post videos they don't have the rights to, the platform can be held liable for copyright infringement unless it complies with the notice-and-takedown process of the Digital Millennium Copyright Act. This is why copyrighted content is often removed from social media platforms almost instantly when it wasn't posted by the original creator.
Will Section 230 be revised?
While Joe Biden has publicly stated that he wants to revoke Section 230, many online companies have argued against that move. They claim that the cost of monitoring every post before publication would make them unprofitable. However, given that social networks already remove much harmful content nearly instantly, critics ask why they can't do the same for everything else.