
How YouTube is struggling to keep billions under control

Sven Krumrey

YouTube, Facebook and many other media platforms are facing a problem: a lot of smut is distributed through their channels. That has always been the case, but it has recently become a threat, as extremists of all sorts have begun using these channels to spread propagandist and violence-glorifying content. As new privacy laws are passed and advertising sponsors pile on the pressure, the media giants have to either find better ways to handle the deluge of user posts or risk hefty fines. Artificial intelligence (AI) has been touted as the magic bullet, but are algorithms really the solution?

Can artificial intelligence fully replace humans?

To give you an idea of the scope I'm talking about: 500 hours of video content are uploaded to YouTube every minute - and counting. That works out to roughly 720,000 hours of new footage every day; merely watching it all in real time would keep some 90,000 full-time reviewers busy, before a single judgment call is made. Reviewing and, if necessary, deleting it all would require hundreds of thousands of workers. It would be a golden opportunity to become a major employer - Google has the funds, after all! Instead of a measly 80,000 employees worldwide, 2,500,000 additional jobs could be created to give a few of those billions back to the community. Naturally, that's out of the question. Profits would decline and shareholders would surely threaten self-immolation. That's why Google is leaving this issue in the hands of technology.

Here's the plan: human workers have marked 2 million videos for deletion, adding labels that specify the exact cause. Self-learning machines analyze that data, scanning both audio and video tracks to learn to recognize people and objects in context (or situations). Even text overlays and political or religious symbols are detected. The objective: to find and remove violence-glorifying content, terrorist propaganda, hate speech, spam and, naturally, nudity.
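For the technically curious, here is a minimal sketch of what such a training setup could look like. Everything in it is an assumption on my part (the label set, the feature sizes, the tiny network); it illustrates the general idea of learning from human-labeled videos, not YouTube's actual system:

```python
# Hypothetical sketch: a classifier trained on human-labeled videos,
# combining audio and video features. All names and dimensions are
# illustrative assumptions, not YouTube's real pipeline.
import torch
import torch.nn as nn

LABELS = ["ok", "violence", "terror_propaganda", "hate_speech", "spam", "nudity"]

class ContentClassifier(nn.Module):
    def __init__(self, video_dim=1024, audio_dim=128, hidden=256):
        super().__init__()
        # Separate encoders per modality, fused before classification.
        self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, len(LABELS))

    def forward(self, video_feats, audio_feats):
        fused = torch.cat([self.video_enc(video_feats),
                           self.audio_enc(audio_feats)], dim=-1)
        return self.head(fused)  # one raw score per policy label

model = ContentClassifier()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in for the ~2 million human-labeled videos: random features here.
video_feats = torch.randn(32, 1024)   # e.g. pooled frame embeddings
audio_feats = torch.randn(32, 128)    # e.g. pooled audio embeddings
labels = torch.randint(len(LABELS), (32,))

# One training step: the model learns to reproduce the human labels.
optimizer.zero_grad()
loss = loss_fn(model(video_feats, audio_feats), labels)
loss.backward()
optimizer.step()
```

The key point: the machine invents no standards of its own; it merely learns to reproduce, at scale, the judgments human reviewers encoded in those 2 million labels.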

Today, artificial intelligence has already replaced many human workers.

The algorithms are continuously refined with each iteration. Which videos show a bombing, a swastika or an uncovered female breast? Censors were already quite swift when it came to pornography, but other illegal content is now slowly coming into focus as well. Affected videos are marked and later deleted from the portal. Of the more than 8 million recently deleted videos, a whopping 6.6 million were identified through AI, while human workers and user feedback did the rest. Many videos hadn't even become publicly viewable yet. But while the video portal is celebrating, the devil is in the details.

Of late, problem cases have been piling up, since the technology doesn't always act as intended. War crime documentaries meant to educate were erroneously deleted, and so were historical movies. The algorithms detected the depiction of inhuman practices but failed to grasp the intention behind the films. Such are the limits of AI to this day: it can spot questionable content, but it can't (yet) decipher the rationale behind it. The same applies to nudity: nude paintings, common in the fine arts, also met with disapproval from the virtual jury and were likewise deleted. After all, how are algorithms to tell the difference between artful nudity and an obscene home video? It seems companies can't do without common (human) sense just yet.
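One common way to combine the two (sketched below in plain Python, with thresholds invented purely for illustration) is to let the machine act on its own only when it is very confident and to route everything ambiguous to a human reviewer:

```python
# Hedged sketch of confidence-based triage. The thresholds and the
# function are my assumptions, illustrating the general technique,
# not YouTube's actual pipeline.

AUTO_REMOVE_THRESHOLD = 0.98   # near-certain violations are removed outright
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous cases go to a person

def triage(violation_score: float) -> str:
    """Decide a video's fate given the model's confidence that it
    violates policy (0.0 = surely fine, 1.0 = surely a violation)."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # e.g. exact re-uploads of known terror footage
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # war documentary or propaganda? A human decides.
    return "keep"

# A nude painting might score 0.7: flagged, but a reviewer can recognize art.
print(triage(0.7))   # -> human_review
print(triage(0.99))  # -> remove
```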

Which of the countless videos contain illegal content?

Satire is also beyond a machine's comprehension. While many of us can laugh at Monty Python's Nazi jokes, computers are totally devoid of humor. The closer a joke sticks to the "original", the quicker it will face automated deletion. That's why many users see signs of a digital inquisition on the horizon. They welcome YouTube's struggle to no longer be a cesspool for extremist, hateful or confused minds, but they criticize the AI's shotgun approach. Today, investigative journalists and organizations that document war crimes face permanent suspension of their channels. Even G-rated garden party videos get deleted because the AI misinterprets bare skin. By contrast, videos uploaded by pedophiles stay up, because these people know how to exploit the AI's weaknesses through subtlety. No algorithm can (yet) decipher the many possible shades of a topic.

It seems human workers will remain indispensable for some time to come to evaluate said shades, and YouTube will have to comply with some form of binding standard to stay relevant. The platform will also have to become more transparent: presently, users receive no explanation as to why their videos were blocked. YouTube has vowed to respond to questions more quickly and to provide insights into how its guidelines are enforced. That should be a given but, in the case of YouTube, it actually means progress. They've also recruited additional staff, if only in moderation. Apparently, YouTube themselves don't fully trust their AI, and that's at least some comfort.

What I would like to know: do you believe artificial intelligence is the way to go here or is common (human) sense still a necessity?
