Purging bad actors
Twitter (TWTR) and Facebook (FB) are fed up with people who use their social networks to assail or misinform others. They’re continuing to develop features aimed at purging or downgrading the capacity of “bad actors” on their platforms.
Twitter recently announced that it will deploy algorithms to detect and punish accounts engaged in abusive conduct. Using algorithms, it said, will help it police its platform in real time, reducing the risk that users leave Twitter because the company is slow to act against abusers.
Facebook has also made technology updates to make its social network safer for users. One of the platform's biggest problems, and a threat to its reputation, is fake news stories. From the United States (SPY) to Europe (EFA), there have been public outcries over rampant misinformation on Facebook. In response, the company recently launched a feature that assigns a “disputed” tag to news stories whose accuracy is questionable.
Risk of getting burned
But these platform-cleansing efforts carry risks for both companies. Twitter could trigger a subscriber outflow if its new technology ends up punishing innocent accounts. It’s already struggling with stagnating user growth, as you can see in the above graph, and now it faces more competition from Snap (SNAP). For Facebook, the war on fake news could cut the flow of fresh content that keeps its 1.9 billion users glued to the site.
There’s also the risk that Twitter and Facebook hurt user engagement on their platforms. While they’re fighting a noble war, they could also be putting their bottom lines on the line.