
Fight against evil or step in the wrong direction? Apple's latest plans

Sven Krumrey

Apple wants to keep child sexual abuse material off their iCloud servers and away from minors–and has created an uproar among their fans and beyond. However noble and necessary the cause, the technical implementation and the unspoken general suspicion cast on everyone have sparked controversy. So what exactly is Apple planning, how far can they go as a private company, and how are other service providers handling the issue? Let's find out!

Some apples are rotten on the inside

What happened?

Apple announced that, beginning this fall in the US, they will add three new features to iPhones, iPads and macOS devices meant to combat the abuse and sexual exploitation of children. A focal point will be detecting CSAM (child sexual abuse material), preventing its spread and reporting offenders to the authorities. That's a great idea and no cause for concern, right? Let's look at the new features in detail, starting with Siri and Search. In the future, users who come in contact with CSAM by whatever means will be offered information on how to report the material. Users who actively search for CSAM will be warned of its harmfulness and pointed towards professional help. Things get more interesting for iMessage users under the age of 13: their incoming and outgoing messages will be scanned locally for sexually explicit images, provided Family Sharing is enabled and parental approval has been given. Flagged images will initially appear blurred, and minors will have to actively confirm that they want to view them, after which their parents will be notified–without either them or Apple ever seeing the material itself.
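
To make that flow a little more tangible, here is a minimal Python sketch of how such an on-device check could be wired together. All names and the dummy classifier are my own assumptions for illustration; Apple has not published its implementation.

    from dataclasses import dataclass

    @dataclass
    class ChildAccount:
        age: int
        family_sharing: bool
        parental_approval: bool

    def looks_explicit(image: bytes) -> bool:
        # Stand-in for the on-device classifier; the real model is not public.
        return len(image) > 0 and image[0] == 0xFF  # dummy rule to keep this runnable

    def handle_incoming_image(image: bytes, account: ChildAccount) -> str:
        # The feature only applies to under-13s with Family Sharing enabled
        # and parental approval given.
        if account.age >= 13 or not (account.family_sharing and account.parental_approval):
            return "shown normally"
        if not looks_explicit(image):
            return "shown normally"
        # Flagged images are blurred first; the child must actively confirm
        # viewing, after which the parents are notified. Neither the parents
        # nor Apple ever receive the image itself.
        return "blurred; parents notified if the child confirms viewing"

    print(handle_incoming_image(b"\xff\xd8\xff", ChildAccount(10, True, True)))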

The bone of contention

While these two features alone have already caused quite a stir, it's the third one that takes the cake. All photos scheduled for iCloud upload will be checked against a special CSAM database. Here's how: the files will be scanned and analyzed locally to generate a hash value using Apple's new NeuralHash procedure. Think of hash values as fairly unique digital fingerprints, like human DNA. The values will then be compared against said database, which contains hash values of known CSAM images. This method is said to also compensate for minor modifications, like size adjustments or compression artifacts. Positive matches won't be reported immediately; Apple has set a threshold of around 30 matches before findings are manually reviewed by a human. If the images are then confirmed to be CSAM, the National Center for Missing and Exploited Children (NCMEC) will be notified, which by extension involves law enforcement agencies. Affected user accounts will be suspended, but users will receive no warning, neither when the first match occurs nor when the threshold is exceeded.
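
In code, the general principle looks roughly like this: hash locally, compare against a list of known hashes, and only escalate once the match count crosses the threshold. This is a deliberately simplified sketch; SHA-256 stands in for NeuralHash (whose perceptual matching is far more tolerant of changes), and all names are invented.

    # Illustrative sketch only: SHA-256 stands in for Apple's NeuralHash, and the
    # threshold logic mirrors the description above in the simplest possible way.
    import hashlib

    KNOWN_CSAM_HASHES: set[str] = set()  # would be supplied as hashes, never as images
    MATCH_THRESHOLD = 30                 # roughly the figure Apple has mentioned

    def image_hash(image_bytes: bytes) -> str:
        # NeuralHash is a perceptual hash tolerant of resizing and compression;
        # a cryptographic hash like SHA-256 is not, and is used here only
        # to keep the example runnable.
        return hashlib.sha256(image_bytes).hexdigest()

    def check_upload_batch(images: list[bytes]) -> bool:
        """Return True once the batch crosses the human-review threshold."""
        matches = sum(1 for img in images if image_hash(img) in KNOWN_CSAM_HASHES)
        # Below the threshold nothing is reported; above it, a manual review
        # would be triggered before any account action.
        return matches >= MATCH_THRESHOLD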

Apple’s move: everyone’s a suspect

What does Apple say?

Naturally, Apple insists they have found the optimal solution, also with regard to user privacy. They are particularly proud that the scanning will take place on user devices, so Apple will only learn about positive matches, and only once the threshold is exceeded. In addition, false positives are said to be nigh impossible, and users will have the right to object to their account being suspended. Since the image analysis isn't pixel-perfect and is performed through machine learning, the system is said to be both flexible and robust. Each image will be uploaded along with a safety voucher that deems it safe or unsafe. Apple puts the chance of falsely flagging a given account at one in a trillion per year. So all is well? It's not the current, or near-future, state of affairs that critics fear; it's the general trend this approach may have just ushered in.
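
Apple hasn't published the math behind that figure, but the effect of a threshold is easy to check with a back-of-the-envelope calculation. The numbers below (per-image false-match rate, photos per year) are purely my assumptions; the point is only that requiring roughly 30 independent matches makes an accidental flag astronomically unlikely.

    # Back-of-the-envelope check under purely hypothetical numbers; Apple has not
    # published the per-image false-match rate this sketch assumes.
    from math import lgamma, log, exp

    p = 1e-6        # assumed chance that one innocent photo falsely matches
    n = 20_000      # assumed photos uploaded by one account in a year
    threshold = 30  # matches required before any human review

    def log_pmf(k: int) -> float:
        """Log of the binomial probability of exactly k false matches."""
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log(1 - p))

    # P(at least `threshold` false matches); later terms are negligible,
    # so summing a window of a few hundred terms is plenty here.
    tail = sum(exp(log_pmf(k)) for k in range(threshold, threshold + 300))
    print(f"{tail:.1e}")  # roughly 4e-84 under these assumed numbers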

Controversial even at Apple

Apple is known for secrecy and discretion, but this time, details about a lively debate among their employees have come out. The staff in Cupertino fear a disproportionate expansion of the new measures, e.g. driven by all-too-nosy governments, and a departure from the "What happens on your iPhone, stays on your iPhone" mantra that has long been part of Apple's DNA. The fear of interference by international intelligence agencies is already all too real: in mainland China, all iCloud data is routed through local data centers–with a premium focus on data privacy, I'm sure. Then there's the never-ending fight between US authorities and everything encrypted and/or inaccessible to them–with the EU a close second. Both here and there, efforts are underway to undermine encryption and monitor messages and chats on a massive scale. After all, citizens could potentially use their privacy to break the law.

Apple at the core of a great privacy debate

Encrypted = suspicious

Governments across the globe have long dreaded encrypted communication between users. And US authorities have made several unsuccessful requests to Apple to unlock iPhones of suspected criminals. So far, Apple has never complied, usually citing their unwillingness to set a precedent and their technological inability to do so, e.g. because of encryption. Once the above-described iMessage and iCloud scanning is in place, the latter argument will likely fall flat, and both law enforcement and intelligence agencies will have a field day. Since the scanning takes place before the uploaded files are encrypted, it also compromises a holy grail of communication: end-to-end encryption. Many critics consider this a wide-open backdoor: once the technology is installed, what's to stop Apple, or others, from scanning for different material as well?
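
To see why critics focus on the ordering, compare the two pipelines in this deliberately simplified sketch. Every function here is a placeholder of my own (XOR instead of real encryption, a plain hash instead of a voucher); it only illustrates that client-side scanning touches the plaintext before encryption does.

    # Simplified contrast between the two pipelines; nothing here is a real
    # Apple or iCloud API, and the crypto is a toy stand-in.
    import hashlib

    def encrypt(data: bytes, key: bytes) -> bytes:
        # Stand-in for real encryption (e.g. AES); XOR keeps the sketch runnable.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def scan_locally(photo: bytes) -> bytes:
        # Stand-in for the on-device match that produces a "safety voucher".
        return hashlib.sha256(photo).digest()

    def upload_end_to_end(photo: bytes, key: bytes) -> bytes:
        # Classic end-to-end idea: the provider only ever sees ciphertext.
        return encrypt(photo, key)

    def upload_with_client_side_scanning(photo: bytes, key: bytes) -> bytes:
        # The scan runs BEFORE encryption, so the plaintext is inspected on the
        # device even though the upload itself is encrypted.
        voucher = scan_locally(photo)
        return encrypt(photo, key) + voucher  # voucher travels alongside the data

    photo, key = b"holiday snapshot bytes", b"secret"
    print(len(upload_end_to_end(photo, key)), len(upload_with_client_side_scanning(photo, key)))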

Reliability concerns

Though Apple's approach is based on fairly new technology, we've had AI-driven, software-based scanning and filters for many years now. And we've had false positives time and time again. In the case of Apple, this would mean users being locked out of their accounts and data without notice. Something similar happened last year when some users were locked out of their Microsoft accounts without comment or warning. They had uploaded images of their babies in the bathtub (more or less naked, or not wearing diapers) to OneDrive and were banned as a result. "You'd better not subscribe if you have photos of yourself wearing a skin-colored bra," users in some forums scoffed. That these images are perfectly legal in some countries, e.g. Germany, and that users are often totally unaware OneDrive syncing is active is completely irrelevant–to Microsoft. The company hides behind passages in their terms and conditions that ban "nudity, brutality and pornography" and leave ample room for interpretation. Apple users could now face a similar fate, especially since many are unaware they even have iCloud syncing enabled. Cloud providers like to push their services by setting them as defaults or, some say, (re)enabling them after system updates–by accident, of course. In the case of Apple devices, this means that images will soon be proactively scanned for CSAM.

“What happens on your iPhone, stays on your iPhone”–right?

How others are handling the situation

As a rule of thumb: everything you upload to the cloud or social networks will be scanned for illegal content. This includes Facebook and Twitter as well as Dropbox, OneDrive and Google Drive, typically using Microsoft's PhotoDNA or Google's Bedspread detector. Since providers can be held accountable for hosted files, this makes perfect sense. Numerous email providers act the same way, including Apple's iCloud-based email service, which has performed sporadic checks for years. Google's Gmail has been scanning for illegal content and cooperating with law enforcement agencies since 2014.
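
The key difference to Apple's plan is where the check runs: these services hash and compare files on their own servers after upload, not on the customer's device. Here is a rough sketch of that server-side pattern, with invented names standing in for PhotoDNA and the reporting pipeline.

    # Rough sketch of server-side scanning; the function names are invented
    # and SHA-256 merely stands in for a robust perceptual hash like PhotoDNA.
    import hashlib

    KNOWN_ILLEGAL_HASHES: set[str] = set()  # hash list maintained by the provider

    def hash_like_photodna(file_bytes: bytes) -> str:
        return hashlib.sha256(file_bytes).hexdigest()

    def report_for_review(user_id: str) -> None:
        print(f"account {user_id} flagged for manual review")

    def store(file_bytes: bytes, user_id: str) -> None:
        pass  # persist the file as usual

    def handle_upload(file_bytes: bytes, user_id: str) -> None:
        # The check runs on the provider's servers, after upload, rather than
        # on the user's own device as in Apple's plan.
        if hash_like_photodna(file_bytes) in KNOWN_ILLEGAL_HASHES:
            report_for_review(user_id)
        store(file_bytes, user_id)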

Doubts remain

Does the end (of CSAM?) justify the means, or does this shotgun approach put every decent citizen under general suspicion, while every criminal with half a brain has long since fled to the darknet and hidden behind complex encryption? I believe it pays to adopt a critical stance and not condone everything just to avoid being dubbed a contrarian or someone with "something to hide". It's also strange that private companies are given so much leeway here and only face sanctions in the event of serious legal infringements. Maybe an international and unbiased oversight committee could provide assistance and assess the procedures, but we're not there yet. And is it even legal for Apple to use my resources (battery, processor, time) to scan a device they don't own? Is this a noble fight against child pornography or the prelude to mass surveillance on an unprecedented scale, as Edward Snowden fears?

You see, I'm on the fence about this development, however much I support the fight against child pornography. If you've already made up your mind, I'd love to hear your thoughts.

Research: Manuel Verlaat
