>Apple computes a hash of each image you upload to iCloud, then checks it against a list of CP hashes.
I don't think it computes a simple hash of the image; it's a tad more involved than that.
Simple cryptographic hashing is easily evaded: change a single pixel and the digest changes completely. They must be computing a perceptual identifier from the contents of the images in the CSAM database, and that requires computational analysis on the handset or computer. If that were all that was happening it would be no problem, but of course there are management interfaces to the classifier/analyzer, the catalog, the backend, etc.
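To make the distinction concrete, here's a toy sketch in Python (an "average hash", not Apple's NeuralHash, which is a learned perceptual embedding) showing why an exact hash is trivially evaded while a perceptual hash survives a one-pixel tweak:

    import hashlib

    def average_hash(pixels):
        # 64-bit perceptual hash of an 8x8 grayscale image (64 ints, 0-255):
        # each bit records whether a pixel is above the mean brightness.
        avg = sum(pixels) / len(pixels)
        return int("".join("1" if p > avg else "0" for p in pixels), 2)

    def hamming(a, b):
        return bin(a ^ b).count("1")

    original = [(i * 37) % 256 for i in range(64)]  # stand-in "image"
    tweaked = original[:]
    tweaked[0] += 1                                 # one-pixel change

    # Exact hash: any change produces a completely different digest.
    print(hashlib.sha256(bytes(original)).hexdigest()[:16])
    print(hashlib.sha256(bytes(tweaked)).hexdigest()[:16])

    # Perceptual hash: the tweak barely moves the hash, so a small
    # Hamming-distance threshold still matches the altered copy.
    print(hamming(average_hash(original), average_hash(tweaked)))  # -> 0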
The contents of the identifiers are purposefully opaque, to prevent spoofing of the identifier database. I don't know what is included in the images; what if I take a picture at Disneyland with a trafficked person in the frame? Will that make it into the identifier database? What gets added to the CSAM signature database, and why? What is the pipeline of hashes from NCMEC and other child-safety organizations -> Apple's CSAM image classifier alarm?
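For what it's worth, the whitepaper[X] sketches that pipeline: NCMEC-provided hashes are blinded server-side, the blinded table ships with the OS, and on-device matches produce encrypted "safety vouchers" that Apple can only open once a match threshold is crossed. Here's a deliberately crude model of that data flow; BLIND_KEY and THRESHOLD are invented for the sketch, and the real construction uses private set intersection, so the device can neither run blind() itself nor learn whether a match occurred:

    import hmac, hashlib

    BLIND_KEY = b"server-secret"   # hypothetical; held server-side only
    THRESHOLD = 30                 # hypothetical match threshold

    def blind(image_hash: bytes) -> bytes:
        # Server-side blinding: the device ships with blinded values only,
        # so the raw NCMEC-derived hash list is opaque on the handset.
        return hmac.new(BLIND_KEY, image_hash, hashlib.sha256).digest()

    # Toy stand-in for the NCMEC-derived database.
    known_db = {blind(b"\x01" * 32), blind(b"\x02" * 32)}

    def scan_before_upload(image_hash: bytes, vouchers: int) -> int:
        # One "safety voucher" per match; nothing is actionable until the
        # account crosses THRESHOLD, at which point human review begins.
        if blind(image_hash) in known_db:
            vouchers += 1
        if vouchers >= THRESHOLD:
            print("threshold crossed: vouchers decryptable for review")
        return vouchers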
>I get it, the mechanism they're using has apparent flaws, and maybe some whacko could somehow get access to your phone and start uploading things that trick the algorithm into thinking you have CP.
The CSAM analyzer could be subverted in any number of ways. I question how the CSAM identifiers are monitored for QA (I shudder to think there are already humans doing this :( how unpleasant) and the potential for harmful adversaries to repurpose this tool for other ends. One contrived hypothetical: locating pictures of Jamal Khashoggi on people's computer systems via 0-day malware. Another: locating images of Edward Snowden. A more easily conceived notion: locating Amber Alert subjects on people's phones, geofenced or not.
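Note that nothing in a matcher like the sketch above is CSAM-specific; the database is just data supplied upstream. Reusing blind() from that sketch (load_target_hashes is invented here), repurposing is a one-line change:

    def load_target_hashes(source):
        # Hypothetical: whoever controls the pipeline supplies these.
        return [b"\x03" * 32]

    # Same scanning machinery, different target set.
    known_db = {blind(h) for h in load_target_hashes("other-targets")}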
To my eyes, it appears we will soon have increased analysis challenges. Self-analysis of device activity and functions (for example, checking for image-scanning malware) gets slightly harder now that a blessed scanner with unknown characteristics is running on the system. Does this pose a challenge to system profiling? How does this interact with battery management? Is only iCloud content scanned, or is everything scanned on-device and only checked against the database before being sent to iCloud? (The latter appears to be the case[X].)
There should be user notification too. If some sicko somehow sends me something like that, I would surely want to know so I can call the cops!
All in all, this makes me feel bad. There is not much silver lining from my perspective. While the epidemic of unconscionable child abuse continues, I question the effectiveness of this approach.
I would not consider jailbreaking my iPhone except for this kind of stuff. I would like to install network and permissions monitoring software such as Bouncer[0] or Little Snitch[1], although these are, helpfully, not available for iOS.
I feel grateful that I am unlikely to be affected by this image scanning software: I plan to continue my personal policy of never storing pictures of any people whatsoever. I don't even store family photos this way. My life is not units in a data warehouse.
[0] - https://play.google.com/store/apps/details?id=com.samruston....
[1] - https://www.obdev.at/products/littlesnitch/index.html
[X] - Apple's Whitepaper: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...