People with a tendency toward antagonistic views on Twitter may soon find themselves banned more quickly than ever before. The social media giant announced this week that they have started using software to find and ban “extremist” users instead of relying on reports from other users.
As social media increasingly becomes a mouthpiece for people to incite unrest, both American and European governments have been calling for measures to fight radical views on social media, especially those calling for violence.
Twitter announced the changes as part of their twice-yearly transparency report. In the last six months of 2016, the company said, they suspended over 370,000 accounts for violations related to the promotion of terrorism. That’s an average of over 60,000 a month – up about 25,000 a month from the same period the year before.
Regarding recent “extremist” bans, Twitter said that about 75% of people booted off the service for encouraging political or religious violence were first identified by “internal, proprietary spam-fighting tools.” Less than 2% of bans came as a result of user reports. That’s a stark increase compared to this time last year, when less than one-third of takedowns came from software.
Back in August of 2016, Twitter removed more than 230,000 accounts specifically connected with terrorism, bringing their total to 350,000 removed accounts for the year. So while the automated software has been in place for a while, it’s clearly becoming more effective.
Twitter has also taken steps to fight what they call “whack-a-mole” syndrome. Previously, if a user was banned, it was incredibly easy for them to simply create a new account and continue causing trouble. Now, a user kicked off the service for inciting extremism will have a much harder time joining again.
While Twitter notes that their software certainly isn’t a “magic algorithm,” it’s still a significant step toward making the site a safer place.