Art to science in moderating internet content

This article describes the efforts of Facebook, YouTube, and similar hosts of user-generated content to screen unacceptable material, both speech and images. It is apparently a grim task, given the depravity of some of that material. For the first decade, moderation methods were heavily ad hoc, but they gradually grew more complex and formalized in response to questions such as when to allow violent images as news. In aviation terms, moderation was at Stage 2: Rules + Instruments. Now some companies are developing Stage 3 (standard procedures) and Stage 4 (automated) methods.

In May 2014, Dave Willner and Facebook software engineer Dan Kelmenson patented a 3D-modeling technology for content moderation, designed around a system that resembles an industrial assembly line. “It’s tearing the problem [of huge volume] into pieces to make chunks more comprehensible,” Willner says. First, the model identifies a set of malicious groups, such as neo-Nazis, child pornographers, or rape promoters. It then identifies users who associate with those groups through their online interactions. … In this way, companies can identify additional or emerging malicious online activity. “If the moderation system is a factory, this approach moves what is essentially piecework toward assembly,” he said. “And you can measure how good the system is.”
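The seed-and-expand idea described in that passage can be sketched in a few lines. The Python below is a loose illustration of the concept, not the patented system: the interaction graph, the group names, and the thresholds are all invented for the example.

```python
# A minimal sketch of "identify seed groups, then flag associated users,
# then surface emerging groups." All data and thresholds are hypothetical.
from collections import defaultdict

# Hypothetical interaction graph: user -> set of groups they interact with.
interactions = {
    "user_a": {"seed_group_1", "hobby_club"},
    "user_b": {"seed_group_1", "seed_group_2"},
    "user_c": {"hobby_club"},
}

# Groups already identified as malicious (the "seed" set).
seed_groups = {"seed_group_1", "seed_group_2"}

def flag_associated_users(interactions, seed_groups, min_links=1):
    """Return users whose interactions touch at least `min_links` seed groups."""
    flagged = {}
    for user, groups in interactions.items():
        links = groups & seed_groups
        if len(links) >= min_links:
            flagged[user] = links
    return flagged

def emerging_groups(interactions, flagged_users, seed_groups, min_flagged=2):
    """Surface non-seed groups frequented by several flagged users --
    candidates for additional or emerging malicious activity."""
    counts = defaultdict(int)
    for user in flagged_users:
        for group in interactions[user] - seed_groups:
            counts[group] += 1
    return {g for g, n in counts.items() if n >= min_flagged}

flagged = flag_associated_users(interactions, seed_groups)
print(flagged)  # user_a and user_b are linked to seed groups
print(emerging_groups(interactions, flagged, seed_groups, min_flagged=1))
```

Each stage here is a small, measurable step, which is the “piecework toward assembly” point: you can audit how many users each pass flags and how many candidate groups it surfaces, rather than judging one monolithic process.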

Source: The secret rules of the internet | The Verge

MODERATION IS “A PROFOUNDLY HUMAN DECISION-MAKING PROCESS.”

 
