Apple wants to keep child sexual abuse material off their iCloud servers and away from minors, and has created an uproar among their fans and beyond. However noble and necessary the cause, the technical implementation and the unspoken general suspicion toward everyone have sparked controversy. So what is Apple planning, how far can they go as a private company, and how are other service providers handling the issue? Let's find out!
What happened?
Apple announced that, beginning this fall in the US, they will add three new features to iPhones, iPads and Macs meant to combat the abuse and sexual exploitation of children. A focal point is detecting CSAM (child sexual abuse material), preventing its spread and reporting offenders to the authorities. That's a great idea and no cause for concern, right? Let's look at the new features in detail, starting with Siri and Search. In the future, users who come into contact with CSAM by whatever means will be offered information on how to report the material. Users who actively search for CSAM will be warned of its harmfulness and pointed toward professional help. Things get more interesting for iMessage users under the age of 13: Their incoming and outgoing messages will be scanned locally for sexually explicit images, provided Family Sharing is enabled and parental approval has been given. Flagged images will initially be blurred, and minors will have to actively confirm before viewing them, after which their parents will be notified–without either the parents or Apple ever seeing the material itself.
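To make that flow a bit more concrete, here is a minimal sketch of how such an on-device check could be wired up. This is not Apple's actual Messages implementation; the `looksExplicit` classifier, the `ChildAccount` type, and the age and opt-in checks are placeholders for illustration only.

```swift
import Foundation

// Hypothetical account settings; not Apple's real data model.
struct ChildAccount {
    let age: Int
    let familySharingEnabled: Bool
    let parentalOptIn: Bool
}

// Hypothetical on-device classifier for sexually explicit images.
// A real implementation would run a local machine-learning model.
func looksExplicit(_ image: Data) -> Bool {
    return false
}

func handleIncomingImage(_ image: Data, for account: ChildAccount,
                         childConfirmsViewing: Bool) {
    // Only child accounts with Family Sharing and parental opt-in are affected.
    guard account.age < 13,
          account.familySharingEnabled,
          account.parentalOptIn,
          looksExplicit(image) else {
        return // the image is displayed normally
    }
    print("Image is blurred; the child must actively confirm to view it.")
    if childConfirmsViewing {
        // Parents are notified, but neither they nor Apple see the image itself.
        print("Parents receive a notification that the image was viewed.")
    }
}
```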
The bone of contention
While these two features alone have already created quite a stir, it's the third one that takes the cake. All photos scheduled for iCloud upload will be checked against a special CSAM database. Here's how: The files will be analyzed locally to generate a hash value using Apple's new NeuralHash procedure. Think of hash values as fairly unique digital fingerprints, like human DNA. These values will then be compared against a database containing the hash values of known CSAM images. The method is said to tolerate minor modifications, like size adjustments or compression artifacts. Positive matches won't be reported immediately; Apple has set a threshold of around 30 matches before the findings are manually reviewed by a human. If the images are then confirmed to be CSAM, the National Center for Missing and Exploited Children (NCMEC) will be notified, which brings in law enforcement agencies by extension. Affected user accounts will be suspended, but users will receive no warning, neither when the first match occurs nor when the threshold is exceeded.
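Conceptually, the matching boils down to something like the sketch below. The `perceptualHash` function is only a stand-in (NeuralHash itself isn't public), the hash list is empty here, and the threshold is taken from the figure above. Note that in Apple's real design the device cannot even tell locally whether a hash matched; that cryptographic layer is left out of this sketch.

```swift
import Foundation

// Stand-in for a perceptual hash like NeuralHash: visually similar images
// should map to the same (or nearly the same) value, so resizing or
// recompressing an image ideally doesn't change the result.
func perceptualHash(of imageData: Data) -> UInt64 {
    // Placeholder only; a real perceptual hash analyzes visual features.
    return UInt64(truncatingIfNeeded: imageData.hashValue)
}

// Hash values of known CSAM images, as supplied by organizations like NCMEC
// (left empty here for obvious reasons).
let knownHashes: Set<UInt64> = []

// The reported threshold of roughly 30 matches before human review.
let reviewThreshold = 30

// Returns true once a photo library crosses the review threshold.
func crossesThreshold(photos: [Data]) -> Bool {
    let matchCount = photos.filter { knownHashes.contains(perceptualHash(of: $0)) }.count
    // Below the threshold, nothing is reported; above it, the matches
    // would be queued for manual review.
    return matchCount >= reviewThreshold
}
```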
What does Apple say?
Naturally, Apple insists they have found the optimal solution, also with regard to user privacy. They are particularly proud that the scanning takes place on user devices, so Apple will only learn about positive matches, and only once the threshold is exceeded. In addition, false positives are said to be nigh impossible, and users will have the right to appeal if their accounts are suspended. Since the image analysis isn't pixel-perfect but based on machine learning, the system is meant to be both flexible and robust. Each image will be uploaded along with a safety voucher that encodes whether it matched. Apple puts the chance of an account being falsely flagged at one in a trillion per year. So all is well? It's not the current, or near-future, state of affairs that critics fear; it's the general trend this approach may have just ushered in.
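The safety-voucher idea can be illustrated roughly as follows. In Apple's actual design, private set intersection and threshold secret sharing keep the match status hidden from the server until enough matches have accumulated; the plain `isMatch` flag below is a deliberate simplification that only shows where the threshold gate sits.

```swift
import Foundation

// Simplified safety voucher: in reality the match status is cryptographically
// hidden, and the payload can only be reconstructed above the threshold.
struct SafetyVoucher {
    let isMatch: Bool          // derived on-device from the hash comparison
    let encryptedPayload: Data // stays opaque below the threshold
}

let reviewThreshold = 30

func serverSideCheck(vouchers: [SafetyVoucher]) {
    let matches = vouchers.filter { $0.isMatch }
    guard matches.count >= reviewThreshold else {
        // Below the threshold, nothing is decrypted and nothing is reported.
        return
    }
    // Only now would the matching payloads be decrypted for human review.
    print("\(matches.count) matches exceed the threshold; queue for manual review.")
}
```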
Controversial even at Apple
Apple is known for secrecy and discretion, but this time, details about a lively internal debate have come out. The staff in Cupertino fear a disproportionate expansion of the new measures, e.g. driven by overly nosy governments, and a departure from the "What happens on your iPhone, stays on your iPhone" mantra that has long been part of Apple's DNA. The fear of interference by international intelligence agencies is already all too real: In mainland China, all iCloud data is already routed through local data centers–with a premium focus on data privacy, I'm sure. Then there's the never-ending fight between US authorities and everything that is encrypted and/or inaccessible to them, with the EU a close second. On both sides of the Atlantic, efforts are underway to undermine encryption and monitor messages and chats on a massive scale. After all, citizens could potentially use their privacy to break the law.
Encrypted = suspicious
Governments across the globe have long dreaded encrypted communication between users, and US authorities have made several unsuccessful requests to Apple to unlock the iPhones of suspected criminals. So far, Apple has never complied, usually citing their unwillingness to set a precedent and their technical inability to do so, e.g. because of encryption. Once the iMessage and iCloud scanning described above is in place, the latter argument will likely fall flat, and both law enforcement and intelligence agencies will have a field day. Since the scanning takes place before the uploaded files are encrypted, it also compromises a holy grail of communication: end-to-end encryption. Many critics consider this a wide-open backdoor: Once the technology is installed, what's to stop Apple, or others, from scanning for different material as well?
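The order of operations is the crux here. A rough sketch, assuming a hypothetical `clientSideScan` step and using CryptoKit for the encryption, shows why scanning before encryption sidesteps whatever protection the encryption itself provides:

```swift
import CryptoKit
import Foundation

// Hypothetical on-device matching step; a placeholder, not Apple's scanner.
func clientSideScan(_ plaintext: Data) -> Bool {
    return false
}

func prepareForUpload(_ plaintext: Data, key: SymmetricKey) throws -> Data {
    // The content is inspected while it is still plaintext...
    let flagged = clientSideScan(plaintext)
    // ...and only then encrypted for upload.
    let sealed = try AES.GCM.seal(plaintext, using: key)
    if flagged {
        // Whatever the scanner found can be reported regardless of how
        // strong the encryption of the uploaded blob is.
        print("Match recorded before encryption.")
    }
    return sealed.combined! // non-nil for the default 12-byte nonce
}
```

The encryption itself stays intact; it just no longer matters for the scanned content, which is exactly what critics mean when they call this a backdoor.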
Reliability concerns
Though Apple's approach is based on fairly new technology, we've had AI-driven, software-based scanning and filtering for many years now. And we've had false positives time and time again. In the case of Apple, this would mean users being locked out of their accounts and data without notice. Something similar happened last year when some users were locked out of their Microsoft accounts without comment or warning. They had uploaded images of their babies in the bathtub (more or less naked or not wearing diapers) to OneDrive and were banned as a result. "You'd better not subscribe if you have photos of yourself wearing a skin-colored bra," users in some forums scoffed. That these images are perfectly legal in some countries, e.g. Germany, and that users are often totally unaware OneDrive syncing is active, is completely irrelevant–to Microsoft. The company hides behind passages in their terms and conditions that ban "nudity, brutality and pornography" and leave ample room for interpretation. Apple users could now face a similar fate, especially since many are unaware they have iCloud syncing enabled. Cloud providers like to push their services by setting them as defaults or, some say, by (re)enabling them after system updates–by accident, of course. For Apple devices, this means that photos will soon be proactively scanned for CSAM.
“What happens on your iPhone, stays on your iPhone”–right?
How others are handling the situation
As a rule of thumb: Everything you upload to the cloud or to social networks gets scanned for illegal content. This includes Facebook and Twitter as well as Dropbox, OneDrive and Google Drive, using either Microsoft's PhotoDNA or Google's Bedspread detector. Since providers can be held accountable for the files they host, this makes perfect sense. Numerous email providers act the same way, including Apple's iCloud-based email service, which has performed sporadic checks for years. Google's Gmail has been scanning for illegal content and collaborating with law enforcement agencies since 2014.
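For contrast, the conventional server-side model looks roughly like this. PhotoDNA and Google's tooling are proprietary, so the hashing step is just a placeholder marking where the check happens, namely after the plaintext file has reached the provider:

```swift
import Foundation

// Rough sketch of server-side scanning as practiced by most cloud providers:
// the provider receives the file in plaintext, hashes it and compares it
// against a database of known illegal material.
func serverReceivesUpload(_ file: Data, knownHashes: Set<UInt64>) {
    let hash = UInt64(truncatingIfNeeded: file.hashValue) // stand-in for PhotoDNA
    if knownHashes.contains(hash) {
        print("Flag the account and report to the relevant authority (e.g. NCMEC).")
    } else {
        print("Store the file as usual.")
    }
}
```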
Doubts remain
Does the end (of CSAM?) justify the means, or does this shotgun approach put every decent citizen under general suspicion while every criminal with half a brain has long since fled to the darknet and hidden behind complex encryption? I believe it pays to adopt a critical stance and not condone everything just to avoid being dubbed a contrarian or someone with "something to hide". It's also strange that private companies are given so much leeway here and only face sanctions in the event of serious infringements of the law. Maybe an international and unbiased oversight committee could assess such procedures, but we're not there yet. And is it even legal for Apple to use my resources (battery, processor, time) to scan a device they don't own? Is this a noble fight against child pornography or the prelude to mass surveillance on an unprecedented scale, as Edward Snowden fears?
You see, I'm on the fence about this development, however much I support the fight against child pornography. If you've already made up your mind, I'd love to hear your thoughts.
Research: Manuel Verlaat
Isn't this how the Nazis started? Having friends and family members spy on each other. Next thing you know, the police will be notified (even if it's a false lead) and will come into our homes and take our computers and our guns.
As a German and former history student, I’ll say that the method is different, luckily, even though I don’t agree with Apple’s actions.
"The urge to save humanity is always a false front for the urge to rule it." --American writer H. L. Mencken (1880-1956)
"Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It may be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep...but those who torment us for our own good will torment us without end, for they do so with the approval of their own conscience." C.S. Lewis
Somehow you didn't receive my full comment from the beginning? I said I had a red flag from the very start of reading this article and thought about it. It sounds good, and I believe we must do all we can to protect our children, being a mother, grandmother, and great-grandmother myself. But we have to be very careful who we trust, given that people and governments can and have misinterpreted laws and other information when given the opportunity to do so. We have witnessed this over the past two years in our nation and in others. So I would have to say no to this! That kind of power in the wrong hands can and will be abused if given the opportunity. Too many promises have been broken by our government and by people who do not have the intentions they claim to have. We see it in the news media and in the political arena. This is too great a responsibility to entrust to a government, a corporation, or any person, especially regarding the well-being and protection of all our children! We would leave a door open that makes all of us even less protected than we already are. Really think about it, people, don't just read it. Thank you!
With Apple being so closely tied to the CCP, are they actually scanning for child porn, or will they be scanning for secrets? Military info, competitors' plans, etc.?
As the saying goes... give them an inch and they will take a mile.
The U.S. Democrats, Google, the WHO, the CDC... give them a foot and you give them your life.
Hi Sven,
In this technological age, how can a non-human distinguish an innocent photo of a naked baby from a photo of a naked child being used for child abuse?
A similar situation would arise with photos of naked, innocent people on a nudist beach.
This is a deflection the same way "national security" is. It starts out with a "noble" objective but soon enough turns into everything you mentioned in the article, basically a free-for-all for government agencies at all levels to "go fishing" in your business.
All they would need to say is "CSAM" and any warrant, search or arrest could happen, just like they justify contact, searches, etc. with "national security".
But does our government truly love us as much as they love themselves and their own lives?