
Reach at all costs - What Facebook does and doesn't delete!

Sven Krumrey

Facebook's press releases are always a pleasure to read. The company regards itself as a facilitator for a worldwide community guided by love and understanding. If Zuckerberg were any cuter, it would be an idyllic place. Alas, all that is just a smokescreen that hides a good deal of hate and violence. That's why Facebook is now legally required to remove particularly despicable content. Still, questionable posts stay online - and an undercover journalist from Britain's Channel 4 recently discovered why.

Constantly under attack: Mark Zuckerberg

If Mark Zuckerberg's life were a cartoon, he'd be rolling in money but also get struck by falling anvils daily. His shareholders are salty because he's made ridiculous amounts of money but still fell short of their overblown expectations. This withdrawal of love has already cost Facebook a quarter of their share value, i.e. 119.4 billion dollars. Now, politicians are forcing him to remove hate speech or face severe fines. And as if that wasn't enough, he was recently duped by a reporter and had to give an interview without a lawyer or prepared statements - it was a disaster. To cap it off, an investigative journalist has now uncovered what was never supposed to become public - I see another anvil coming.

So what happened? A resourceful reporter from British TV station Channel 4 feigned interest in becoming a Facebook moderator - one of the people who decide what stays and what goes on Facebook. What he learned during his training was cruel and inhumane. Apparently, depictions of violence and hate aren't generally deleted but are instead subject to fairly dubious internal rules. Let's start with the obvious: child and youth protection. Rule #1: Users below the age of 13 are not allowed on Facebook, as stated in the general terms and conditions. Still, moderators are supposed to delete posts from minors only if said minors expressly state they're under the minimum required age. Even depictions of self-destructive behavior are to be handled as if posted by adults. In other words, moderators will look the other way - after all, why scare away potential customers? When it comes to user profiles, something is better than nothing, right?

Enough fans and followers? You can stay!

Rule #2: Ban extremists - at least in theory. Extremists are experts at polarization and, you guessed it, generate a lot of attention, discussions, shares and clicks. That's why, on Facebook, the number of followers outweighs extremist views any day, no matter how many complaints pile up. Such accounts are treated like big media outlets or government organizations. To sum it up, popular and famous extremists can stay since they serve Facebook's overarching interests. This also applies to hate speech against ethnic or religious groups. The latter are protected as per Facebook's guidelines but, again, there are unexpected shades of gray. For example, defaming Muslims in general is considered taboo, but insulting Muslim immigrants is not, as they're merely a subset of a religious group. The obvious question - why it is okay to stir up hatred against any group of people at all - remains unanswered.

Response times are also quite interesting. Facebook originally promised to check all flagged comments within 24 hours. And timing definitely matters when users threaten to commit suicide, engage in other self-destructive behavior or attack others. Online, it takes just a few hours to ruin someone's reputation for good once defamatory posts are read and shared by millions, irrespective of their veracity. Behind closed doors, Facebook openly admits to being unable to even remotely meet this deadline, breaking a commitment made to concerned politicians and consumer interest groups. It certainly takes time to view, understand and evaluate a post, and moderators are tasked with processing a whopping 7,000 comments every day. How much time is left for nuance under these conditions?

Not every depiction of violence gets removed

Violence is a no-no on Facebook, as every Facebook employee will attest. Reality paints a different picture - and there's a method to it. During one training session, a video depicting child abuse was shown. Moderators have the option to delete such content, ignore it or mark it as offensive, which is considered a warning shot. Still, these videos often stay online as long as the accompanying texts do not glorify violence or signal disrespect. That's why a video showing a violent altercation between two teenagers (who are easily identifiable) wasn't deleted despite a request from said teenagers, because the associated comments condemned violence. A video depicting the maltreatment of a small child has been available since 2012, and was even used during training, though it was marked "offensive". Users now have to perform an additional click to start watching, and Facebook considers justice served. Roger McNamee, one of the original big Facebook investors, calls a spade a spade: In his view, it's the extreme, dangerous and polarizing statements that attract visitors and lock them into Facebook. Without them, people would spend less time, and watch fewer ads, on the portal.

To Facebook, it's all about reach, interaction and clicks. The goal is clearly not to protect users: fake news and extremist content are tolerated as long as the viewing rates suit Facebook's business interests - and that is all they care about. Everything else is secondary, no matter who is harmed in the process. Facebook will likely defend themselves by saying moderators are trained by subcontractors. But who would honestly believe these subcontractors aren't subject to Facebook's guidelines in one way or another? Even if the above cases were all isolated incidents, why hasn't Facebook altered their training methods yet? They seem to still follow the same procedures the reporter encountered during his training. It'd be great if, for once, Facebook put their money where their mouth is instead of putting out neat press releases.

What's your take on this? Is it even morally justifiable to keep using Facebook after these revelations?
