Technology

Facing regulatory pressure, Meta says improved AI screening allows content moderation to keep up with violent, abusive content

Meta Platforms said today that its artificial intelligence systems are filtering out millions more pieces of violent or bullying content from its Facebook and Instagram social networks, allowing the prevalence of such content seen by users to remain “relatively consistent.”
On Facebook, the social networking giant said in its quarterly content


Mike Swift

Chief Global Digital Risk Correspondent


Mike Swift is an award-winning journalist who has been at the forefront of covering data, privacy and cybersecurity regulatory news for more than a decade. As the Chief Global Digital Risk Correspondent for MLex, in addition to reporting, he coordinates MLex’s worldwide coverage in the practice area. Formerly chief Internet reporter for the San Jose Mercury News and SiliconValley.com, Mike has covered Google, Facebook, Apple, Microsoft, Twitter and other tech companies and has closely tracked technology and regulatory trends in Silicon Valley. He has wide-ranging expertise, from the business of professional sports to computer-assisted reporting. A former John S. Knight Fellow at Stanford University, he is a graduate of Colby College.
