In recent years, governments have repeatedly pressed Facebook and other social media platforms to do a better job of removing extremist content — specifically, anything promoting terrorism.
The platforms already take precautionary measures to remove such content.
Many have turned to artificial intelligence (AI) to help them answer that call, but a study cited by The Atlantic has revealed that these AIs might inadvertently be helping terrorists get away with their heinous crimes — by deleting valuable evidence against them.
The Atlantic piece cites a 2017 Facebook video in which a terrorist oversees the execution of 18 people.
The platform removed the video, but not before it had spread across the internet.
People all across the globe analyzed the video, which led to the discovery that the execution took place in Libya and that the man ordering it was Mahmoud Mustafa Busayf al-Werfalli, an Al-Saiqa commander.
The subsequent warrant for Werfalli’s arrest included several references to the Facebook video and others like it.
Since then, the content-filtering algorithms used by Facebook, YouTube, and the like have grown far more advanced — they now automatically remove huge swathes of extremist content, sometimes before it reaches the eyes of a single user.
This counts as a “win” in many respects — but the trade-off may be the loss of evidence that prosecutors could use to hold warlords, dictators, and terrorists accountable for their crimes.