Why does anyone even assume that a bad actor would need to apply pressure?
These databases are unauditable by design -- all they'd need to do is hand Apple their own database of "CSAM fingerprints collected by our own local law enforcement that are more relevant in this region" (filled with political images of course), and ask Apple to apply their standard CSAM reporting rules.
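To make the concern concrete, here's a minimal sketch (all names invented; this is not Apple's actual pipeline): the matcher only ever sees opaque fingerprints, so it treats a database of political-image hashes exactly like a CSAM database.

```python
# Hypothetical sketch: the scanner compares fingerprints, and nothing
# in it can tell what those fingerprints actually depict.

def matches_database(photo_fingerprint: str, fingerprint_db: set) -> bool:
    return photo_fingerprint in fingerprint_db

# A government-supplied "CSAM" database quietly padded with politics:
planted_db = {"fp_known_csam_1", "fp_protest_photo", "fp_banned_meme"}
print(matches_database("fp_protest_photo", planted_db))  # True -> flagged
```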
Apple does not need to be able to audit the database to discover that it is not a CSAM database. Matches are reviewed by Apple before being reported to the authorities, so they would see that they are getting matches on non-CSAM material.
They wouldn't necessarily be able to tell whether a given match is a false positive against real CSAM or a true positive against illegitimate material a government slipped into the database to misuse the system, but they don't need to know which it is. They just need to see that the image isn't CSAM, and so does not need to be reported.
That's even worse: now Apple is deciding whether to report something even when it matches the data provided by the government. It's not their role to judge the contents, only whether the match is correct, otherwise, even with actual CSAM content, are they going to be making judgement calls? What if the system matches loli content, which I imagine is in that database but legal in some places? Are they going to pull the user's location (!!!!) to see whether it's legal there, or... guess? Or what? Because the only way to make this system work is to report every match and then let actual law enforcement figure out whether it's illegal.
So yeah, the entire system is fucked and shouldn't exist. Apple is not law enforcement and them saying "we'll just prescreen every submission" is actually worse, not better.
> It's not their role to judge the contents, only whether the match is correct or not.
Which is what they would be doing.
Some government gives Apple a purported CSAM hash database, which Apple accepts only on the premise that it is a CSAM database. An image gets a match. Apple looks at it and it is not CSAM. Therefore, unless the government lied to them about the database, it must be a false positive, and it gets rejected as an incorrect match.
The rejection is not because Apple judged the content per se. They just determined that it must be a false positive given the government's claims about the database.
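Spelled out as a decision rule, the reviewer's logic is just this (a sketch, not Apple's actual tooling):

```python
# Premise (claimed by the government): every hash in the database
# corresponds to CSAM. Under that premise, a visual match that is
# not CSAM can only be a hash collision, i.e., a false positive.

def review(matched_image_is_csam: bool) -> str:
    if matched_image_is_csam:
        return "report to authorities"
    return "reject as false positive"  # no judgement of legality needed
```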
My point was: what if you have content in there that is CSAM in some places but isn't in others (for instance, drawings)? If Apple employees report it to the authorities in a state where it isn't illegal, they've just suspended your account and reported you to the authorities without any reason. So like I said, you get into this trap of: do Apple employees start judging whether the match "should" count? What if a picture isn't actually pornographic but made it into the database (say, a child in underwear; maybe it's there because of a connection to an abuse case, but it isn't a picture of abuse per se)? Again, is some random person at Apple going to be making judgement calls about the validity of matches against a government-provided database? Because again, I don't believe this can ever work. Maybe those are edge cases, sure, but my point is that as soon as you allow some Apple employee to make a judgement, you are introducing new risks.
> My point was: what if you have content in there that is CSAM in some places but isn't in others (for instance, drawings)? If Apple employees report it to the authorities in a state where it isn't illegal, they've just suspended your account and reported you to the authorities without any reason.
The only hashes Apple will flag on are ones present in databases from multiple organizations in different jurisdictions; otherwise, those hashes are ignored.
And since no credible child welfare organization is going to hold CSAM hashes that match whatever a repressive government plants in its own database, there's no simple or obvious way to get the two to intersect.
>> The only hashes Apple will flag on are ones present in databases from multiple organizations in different jurisdictions; otherwise, those hashes are ignored.
Have they actually said they would do that? I was under the impression that they just use the database of hashes provided by NCMEC, the American authority on prevention of child abuse.
>> And since no credible child welfare organization is going to hold CSAM hashes that match whatever a repressive government plants in its own database
I'm not sure I understand what you mean, can you expand?
In [1], "That includes a rule to only flag images found in multiple child safety databases with different government affiliations — theoretically stopping one country from adding non-CSAM content to the system."
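A rule like that is essentially an intersection requirement. Here's a rough sketch of how it could work (names and the threshold are assumptions, not Apple's published implementation):

```python
# Only hashes vouched for by organizations under at least two distinct
# sovereign jurisdictions make it into the on-device match set.

def build_match_set(databases, min_jurisdictions=2):
    """databases: list of (jurisdiction, set_of_hashes) pairs."""
    seen = {}
    for jurisdiction, hashes in databases:
        for h in hashes:
            seen.setdefault(h, set()).add(jurisdiction)
    # A hash one government slips into only its own database never
    # reaches the threshold and is silently dropped.
    return {h for h, js in seen.items() if len(js) >= min_jurisdictions}

dbs = [("US", {"hashA", "hashB", "planted"}),  # "planted": US-only addition
       ("EU", {"hashA", "hashB"})]
print(build_match_set(dbs))  # {'hashA', 'hashB'}
```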
1) The DB must be updated on both the server and the client. Apple's solution requires the DBs to match, essentially. So you can't update the DB without everyone knowing it was changed (see the sketch at the end of this comment).
2) Apple has trillions of dollars to lose by selling out to a shitty change like that.
Do those things mean nothing bad can come of it? Hell no. But right now we have FISA courts and silent warrants sucking in data without anyone knowing or being able to talk about it. It's not like the status quo is a panacea at the moment.
Apple's approach creates a possibility of slowing down the politics already trying to move against E2EE data.
This is a political fight, and if we all act like ideologues we're going to lose it all in the end.
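On point 1: the auditability claim boils down to there being one database worldwide whose digest anyone can compare. A minimal sketch, assuming (hypothetically) that Apple ships a single DB and publishes its hash:

```python
# If every device carries the same database, a single published digest
# lets anyone detect a swapped or regionally modified copy.

import hashlib

PUBLISHED_DB_DIGEST = "3b0c44..."  # hypothetical value from Apple's docs

def verify_local_database(db_bytes: bytes) -> bool:
    digest = hashlib.sha256(db_bytes).hexdigest()
    # A device given a doctored database stops matching the global value.
    return digest == PUBLISHED_DB_DIGEST
```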
Apple's move gives away another piece of the puzzle to mass monitoring and surveillance.
Apple's move brings FISA courts and silent warrants one step closer to reaching beyond what we send over the network to what resides on our phones.
Apple is giving mass surveillance a foothold for living on our personal phones and monitoring the data there.
Apple can’t be selling out if they literally have no way of knowing what is in the database of illicit content.
That is why they said that the content has to be in two separate nations' databases. Of course, I've seen no information about which other nation's DB they would use. And if there is no other nation, will there just be no content in the database? I doubt it.
Regardless, it's a moot issue, since we already know that the Five Eyes all conspire and would gladly add content for each other, the same as several Middle Eastern nations.
Right now they have no ability to scan every photo on every iOS device for "objectionable" content (as defined by them on that day, based on their mood). But soon they will. All they have to do is add photos to the NCMEC database and an equivalent DB in another country.
> 2) Apple has trillions of dollars to lose by selling out to a shitty change like that.
Apple has trillions to lose by building this system in the first place. All it takes is one court order to do non-CSAM scanning with the existing system.
My family, my wife's family, and much of our extended families are all privacy conscious (we chat over Signal, just like we close our window blinds at night)... and they are all alarmed over this and ready to ditch Apple if they don't change their stance on this.
All it takes is a wiretap warrant, and Apple would have to scan on-device pictures and iMessages for whatever the wiretap specifies. This is true even if the phone has iCloud switched off (most of us have already switched it off), since all Apple has to do is change a Boolean variable's value, or something similarly mechanical requiring no creative effort (and hence legally coercible).
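To illustrate what "change a Boolean variable's value" means here (entirely hypothetical names; nothing below is Apple's code): if scanning is gated on a single client-side flag, removing the gate is a mechanical edit, not creative work.

```python
# Hypothetical gate: scan only what is about to be uploaded to iCloud.
SCAN_ONLY_ICLOUD_UPLOADS = True  # the feared court order: flip to False

def should_scan(pending_icloud_upload: bool) -> bool:
    if SCAN_ONLY_ICLOUD_UPLOADS:
        return pending_icloud_upload  # the claimed current behavior
    return True  # scan everything on the device, iCloud on or off
```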
> This is total and utter bullshit. It is a complete misunderstanding of how the system works.
No it's not. It's an extrapolation based on known facts.
Governments can and will coerce companies into changing things about their systems, but the main limitation is that, at least in the US, the government can't compel creative work.
But that's just the thing - flipping a bit here or there, or adding non-CSAM image hashes to the database, is all possible and accessible to wiretap warrants, since doing so is a simple change to a system that already exists.
There's a principle here: trust good security system design, but do not trust system operators, since they can be coerced (legally or otherwise). A secure system that requires trusting particular people who operate it is not designed well.
No it’s not. It’s bullshit based on a complete lack of understanding of the system.
Read any of the public documentation on how it works, and you’d know it’s completely wrong.
There are multiple good reasons to oppose this system. But this:
> All it takes is a wiretap warrant and Apple would have to scan on-device pictures and iMessages for whatever the wiretap says.
Is still complete bullshit. No part of it is true or an extrapolation of known facts. It's not true as a technical possibility, nor even as a legal one.
Quoting from Edward Snowden: "I intentionally wave away the technical and procedural details of Apple’s system here, some of which are quite clever, because they, like our man in the handsome suit, merely distract from the most pressing fact—the fact that, in just a few weeks, Apple plans to erase the boundary dividing which devices work for you, and which devices work for them."
So the details really don't matter as much as the fact that this creates a huge precedent. And, quoting Snowden again, "There is no fundamental technological limit to how far the precedent Apple is establishing can be pushed, meaning the only restraint is Apple’s all-too-flexible company policy, something governments understand all too well." (emphasis mine)
So I might read through that PDF, I might even understand it, but it can't dissuade me from the conviction that yes, governments can and will take advantage of this turn of events, and no, there is no technological or legal barrier that can keep them from doing so once this is in place.
>1) The DB must be updated on both the server and the client. Apple's solution requires the DBs to match, essentially. So you can't update the DB without everyone knowing it was changed.
Yes... Because it is impossible for different servers to be configured as backends depending on where handsets are destined to be sold. It's not like it's possible to quickly whip up a geolocation-aware API that can swap things out on the fly. C'mon. This isn't even hard. These are all trivially surmountable problems. The one thing standing in the way of this already having been done was that there was no way in hell anyone would have been daft enough to even try something like this with a straight face. For heaven's sake, even Hollywood lampshaded it with that "using cell phones as sonar" shtick in The Dark Knight or whatever it was.
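For the avoidance of doubt, the "hard" part looks roughly like this (a sketch with invented names): client and server can still "match" perfectly while different regions get different databases.

```python
# Hypothetical region-aware backend: the consistency check between a
# handset and its server passes, because both use the regional variant.

REGIONAL_DATABASES = {
    "US": "csam-db-v42.bin",
    "XX": "csam-db-v42-plus-regional-additions.bin",  # the feared case
}

def database_for(device_region: str) -> str:
    return REGIONAL_DATABASES.get(device_region, REGIONAL_DATABASES["US"])
```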
>This is a political fight, and if we all act like ideologues we're going to lose it all in the end.
The fight was lost the moment someone caved to "think of the children". Every last warning sign left all over the intellectual landscape was ignored: history, risk, human nature, any semblance of good sense, all ignored.
Honestly, I'm sitting here scratching my head wondering if I took a wrong turn or something 20 years ago. This isn't even close to the place I used to live.
> These databases are unauditable by design -- all they'd need to do is hand Apple their own database of "CSAM fingerprints collected by our own local law enforcement that are more relevant in this region" (filled with political images of course), and ask Apple to apply their standard CSAM reporting rules.
That's it... Tyranny complete.