Update – Child pornography: a potential weakness in Apple’s CSAM system illustrated

Update: A researcher has successfully tested the kind of second verification method Apple could use on sensitive images to limit false positives. More details at the end of the article.

Apple is not done defending its new child pornography detection system for iCloud. And the discovery made by developer Asuhariet Ygvar a few days ago will not help the Californian giant's case. Ygvar analyzed the "NeuralHash" algorithm on which Apple's CSAM detection relies, whose code he found in iOS 14.3, and he was even able to recreate his own version of the system in the Python programming language.
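For context, here is a minimal Python sketch of how the publicly reconstructed pipeline computes such an identifier, assuming the neural network (exported to ONNX) and the 96×128 projection matrix have already been extracted from iOS 14.3 and saved locally. The file names and the raw-float32 seed format are placeholders, not Apple's actual artifacts.

```python
import numpy as np
import onnxruntime
from PIL import Image

# Placeholders: the ONNX model and the 96x128 seed matrix must first be
# extracted from iOS 14.3, as described in Ygvar's reconstruction.
MODEL_PATH = "neuralhash_model.onnx"
SEED_PATH = "neuralhash_seed.dat"  # assumed to be raw float32 values

def neuralhash(image_path: str) -> str:
    # Preprocess: 360x360 RGB, pixel values scaled to [-1, 1], NCHW layout.
    img = Image.open(image_path).convert("RGB").resize((360, 360))
    arr = np.asarray(img, dtype=np.float32) / 255.0 * 2.0 - 1.0
    arr = arr.transpose(2, 0, 1)[np.newaxis, ...]

    # Run the extracted network to obtain a 128-dimensional embedding.
    session = onnxruntime.InferenceSession(MODEL_PATH)
    embedding = session.run(None, {session.get_inputs()[0].name: arr})[0].flatten()

    # Project the embedding onto the 96x128 seed matrix and keep only the
    # sign bits: those 96 bits are the NeuralHash, printed here as hex.
    seed = np.fromfile(SEED_PATH, dtype=np.float32).reshape(96, 128)
    bits = "".join("1" if v >= 0 else "0" for v in seed @ embedding)
    return f"{int(bits, 2):024x}"
```

Two images collide, in the sense discussed below, when this function returns the same hex string for both.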

With this, he demonstrated that two very different images could produce the same NeuralHash identifier. This is worrying given the false positives that Apple's detection of child pornography images could generate.

This did not fail to provoke a reaction from Apple, which quickly clarified, via Vice, that the version of the image-detection code found on GitHub is not the one used in its CSAM system. Apple's engineers say they have implemented a much more advanced and much more secure version of this tool.

In addition, as the Californian firm specified in one of its technical documents, a detection threshold must be exceeded before an iCloud account is flagged for sharing child pornography. That threshold is 30 images. On top of this, once the system flags an account, a verification is carried out by hand by Apple employees to eliminate any risk of false positives. This point was detailed in the FAQ published by the Cupertino company a few weeks ago.
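As a purely conceptual illustration of that threshold logic (Apple's technical document describes a cryptographic enforcement based on threshold secret sharing over encrypted "safety vouchers", not a plain counter like this one):

```python
MATCH_THRESHOLD = 30  # the figure cited in Apple's technical document

def account_flagged(matched_image_count: int) -> bool:
    """An iCloud account is only surfaced for manual review by Apple's
    employees once the number of matched images reaches the threshold;
    below it, individual matches are not supposed to be readable at all."""
    return matched_image_count >= MATCH_THRESHOLD
```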

We imagine that every precaution was in fact taken during the development of the system to, on the one hand, respect users' privacy and, on the other, limit false positives. Let's hope it is enough.

Remember that this detection of child pornography content will only become functional with the release of the next major OS versions, iOS 15 and macOS Monterey, expected before the end of 2021.

Updated August 20, 2021

In a new article from Motherboard, Apple said it uses an additional analysis technique to confirm that an image is sensitive child pornography content. This verification takes place on Apple's servers, after the first detection performed locally on users' machines, but before the final verification step carried out by hand by Apple's teams.
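Apple has not disclosed that second, server-side perceptual hash, so the sketch below only illustrates the principle, with a toy average hash standing in for it: a crafted NeuralHash collision would also have to fool this independent function to get past the server.

```python
from PIL import Image
import numpy as np

def average_hash(path: str, hash_size: int = 8) -> int:
    """Toy perceptual hash (aHash): shrink, grayscale, threshold at the mean.
    Stands in for Apple's undisclosed server-side hash, which is independent
    of NeuralHash precisely so that a forged NeuralHash collision is unlikely
    to fool both."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    bits = (pixels > pixels.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def server_side_confirms(candidate_path: str, reference_path: str,
                         max_hamming_distance: int = 5) -> bool:
    """Second check: keep the on-device NeuralHash match only if the
    independent hash also agrees (small Hamming distance)."""
    diff = average_hash(candidate_path) ^ average_hash(reference_path)
    return bin(diff).count("1") <= max_hamming_distance
```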

This second check can therefore cancel out a false positive, which tempers the concern raised by Asuhariet Ygvar's initial discovery showing that the basic algorithm of Apple's CSAM system could be fooled into producing the same identifier for two different photos.

This is what artificial intelligence specialist Brad Dwyer of the firm Roboflow has shown. He demonstrated how an artificial intelligence, OpenAI's CLIP in the case of his study, managed to differentiate two images carrying the same NeuralHash. With such a system, many false positives would be eliminated, which would lighten the work of the Californian firm's teams carrying out the final check before raising the alert.
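In the same spirit as Dwyer's experiment (and only as a sketch, not Apple's actual pipeline), one can compare the CLIP embeddings of two images that share a NeuralHash: genuinely identical photos score close to 1.0, while the artificially colliding pairs score much lower. The 0.9 cutoff below is an arbitrary placeholder.

```python
import torch
import clip  # OpenAI's CLIP package: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clip_similarity(path_a: str, path_b: str) -> float:
    """Cosine similarity of CLIP image embeddings for two images."""
    with torch.no_grad():
        emb = [model.encode_image(preprocess(Image.open(p)).unsqueeze(0).to(device))
               for p in (path_a, path_b)]
    a, b = (e / e.norm(dim=-1, keepdim=True) for e in emb)
    return float((a @ b.T).item())

# A crude filter in the spirit of the study: treat a hash match as suspect
# if the two images do not also look alike to CLIP.
def looks_like_false_positive(path_a: str, path_b: str) -> bool:
    return clip_similarity(path_a, path_b) < 0.9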


By: Keleops AG

Editor-in-chief for iPhon.fr. Pierre is like Indiana Jones, looking for the lost iOS trick. Also a long-time Mac user, Apple devices hold no secrets for him. Contact: pierre[a]iphon.fr.