This technology will soon be out of Apple's control. Higgins correctly highlights the immense pressure Apple will get from governments and other actors to bend the technology and use it for something other than CSAM. It will happen; people are probably already thinking about how to apply such pressure. Sooner or later Apple will cave in, and they will have only themselves to blame when freedom supporters in Sudan or LGBTQ activists in Saudi Arabia are jailed. The issue here is not where to draw the line, but who will draw it (spoiler: not Apple).
Exactly. Governments tend to accept "we lack the technical capability to comply with your request" (unless said capability is legally mandated, e.g. so-called "lawful intercept"). They do not tend to accept "we possess the technical capability to comply with your request but choose not to do so".
Ten years from now we will read a leaked document stating that US authorities requested FISA court warrants (or similar) and made Apple scan for things other than CSAM. It has happened before.
Court orders are easier than hacks once you iron out the legal opinions.
Even if that's true, this method at least drops some part of the process on the client where it can be inspected, vs. all scanning happening in the cloud, entirely behind closed doors.
Does that mean nothing bad can happen? No. But it does mean that when something changes, we at least know something changed.
The database will change with every update. It's not like the NCMEC DB already contains all known CSAM. Every time they find more, they add it to the DB. Thus, it will probably change every time Apple pushes an OS update.
Why does anyone even assume that a bad actor would need to apply pressure?
These databases are unauditable by design -- all they'd need to do is hand Apple their own database of "CSAM fingerprints collected by our own local law enforcement that are more relevant in this region" (filled with political images of course), and ask Apple to apply their standard CSAM reporting rules.
Apple does not need to be able to audit the database to discover that it is not a CSAM database. Matches are reviewed by Apple before being reported to the authorities, so they would see that they are getting matches on non-CSAM material.
They wouldn't necessarily be able to tell whether it was a false positive matching real CSAM or a true positive matching illegitimate material that a government put in the database to misuse it, but they don't need to know which it is. They just need to see that it isn't CSAM and so does not need to be reported.
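Roughly, the review step described above boils down to something like this (a minimal sketch; the type and function names are invented here, not Apple's actual code):

```swift
// Hypothetical sketch of the human-review step: a flagged match is escalated
// only if the reviewer confirms the image really is CSAM. Whether a non-CSAM
// match was a hash collision or a planted database entry doesn't matter;
// either way it is not reportable.
enum ReviewOutcome {
    case reportToAuthorities      // reviewer confirms the image is CSAM
    case dismissAsFalsePositive   // hash matched, but the image is not CSAM
}

struct FlaggedMatch {
    let imageID: String
    let matchedHash: String
}

func review(_ match: FlaggedMatch, reviewerConfirmsCSAM: Bool) -> ReviewOutcome {
    reviewerConfirmsCSAM ? .reportToAuthorities : .dismissAsFalsePositive
}
```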
That's even worse: so now Apple is deciding whether to report something even if it matches the data provided by the government. It's not their role to judge the contents, only whether the match is correct or not. Otherwise, even with actual CSAM content, are they going to be making judgement calls? What if the system matches loli content, which I imagine is in that database but legal in some places? Are they going to pull the user's location (!!!!) to see whether it's legal there, or... guess? Or what? Because the only way to make this system work is to report every match and then let actual law enforcement figure out if it's illegal or not.
So yeah, the entire system is fucked and shouldn't exist. Apple is not law enforcement and them saying "we'll just prescreen every submission" is actually worse, not better.
> It's not their role to judge the contents, only whether the match is correct or not.
Which is what they would be doing.
Some government gives Apple a purported CSAM hash database, which Apple accepts only on the premise that it is a CSAM database. An image gets a match. Apple looks at it and it is not CSAM. Therefore, unless the government lied to them about the database, it must be a false positive and gets rejected as an incorrect match.
The rejection is not because Apple judged the content per se. They just determined that it must be a false positive given the government's claims about the database.
My point was, what if you have content in there that is CSAM in some places but isn't in others (for instance, drawings)? If Apple employees report it to the authorities in a state where it isn't illegal, they have just suspended your account and reported you for no reason. So, like I said, you get into this trap of: do Apple employees start judging whether the match "should" count? What if a picture isn't actually pornographic but made it into the database (say, a child in underwear; maybe it's there because of a connection to an abuse case, but it isn't a picture of abuse per se)? Again, is this random person at Apple going to be making judgement calls about the validity of matches against a government-provided database? Because, again, I don't believe this can ever work. Maybe those are edge cases, sure, but my point is that as soon as you allow some Apple employee to make a judgement, you are introducing new risks.
> My point was, what if you have content in there that is CSAM in some places but isn't in others (for instance, drawings)? If Apple employees report it to the authorities in a state where it isn't illegal, they have just suspended your account and reported you for no reason.
The only CSAM Apple will flag has to come from multiple organizations in different jurisdictions; otherwise, those hashes are ignored.
And since no credible child welfare organization is going to have CSAM that matches stuff from the worst places, there's no simple or obvious way to get them to match.
>>The only CSAM Apple will flag has to come from multiple organizations in different jurisdictions; otherwise, those hashes are ignored.
Have they actually said they would do that? I was under the impression that they just use the database of hashes provided by the American authority on prevention of child abuse.
>>And since no credible child welfare organization is going to have CSAM that matches stuff from the worst places
I'm not sure I understand what you mean, can you expand?
In [1], "That includes a rule to only flag images found in multiple child safety databases with different government affiliations — theoretically stopping one country from adding non-CSAM content to the system."
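Taking the quoted rule at face value, the intersection requirement would amount to something like this (a sketch under that assumption; the names are illustrative, not Apple's implementation):

```swift
typealias ImageHash = String

// A child-safety hash database together with the government affiliation of the
// organization that supplied it.
struct ChildSafetyDatabase {
    let governmentAffiliation: String
    let hashes: Set<ImageHash>
}

// Only hashes vouched for by organizations under at least two different
// government affiliations make it into the on-device list; hashes supplied by
// a single government's organizations are ignored.
func buildOnDeviceHashList(from databases: [ChildSafetyDatabase],
                           minimumAffiliations: Int = 2) -> Set<ImageHash> {
    var affiliations: [ImageHash: Set<String>] = [:]
    for db in databases {
        for hash in db.hashes {
            affiliations[hash, default: []].insert(db.governmentAffiliation)
        }
    }
    return Set(affiliations.filter { $0.value.count >= minimumAffiliations }.keys)
}
```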
1) The DB must be updated on both the server and the client. Apple's solution essentially requires the DBs to match, so you can't update the DB without everyone knowing it was changed (a rough sketch of such a check follows this list).
2) Apple has trillions of dollars to lose by selling out to a shitty change like that.
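For point 1, the kind of check that becomes possible when everyone ships with the same database is roughly this (a minimal sketch, not Apple's actual mechanism; it assumes a digest of the shipped database is published somewhere anyone can compare against):

```swift
import CryptoKit
import Foundation

// If every device receives the same (encrypted) hash database and its digest is
// published, anyone can verify their copy matches what everyone else got.
// A silently swapped, targeted database would change this digest.
func databaseDigest(of databaseBlob: Data) -> String {
    SHA256.hash(data: databaseBlob).map { String(format: "%02x", $0) }.joined()
}

func matchesPublishedDatabase(localBlob: Data, publishedDigest: String) -> Bool {
    databaseDigest(of: localBlob) == publishedDigest
}
```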
Do those things mean nothing bad can come of it? Hell no. But right now we have FISA courts and silent warrants sucking in data without anyone knowing or being able to talk about it. It's not like the status quo is some privacy paradise.
Apple's approach creates a possibility of slowing down the political push already moving against E2EE data.
This is a political fight, and if we all act like ideologues we're going to lose it all in the end.
Apple's move gives away another piece of the puzzle to mass monitoring and surveillance.
Apple's move brings FISA courts and silent warrants one step closer to reaching beyond what we send over the network to what resides on our phones.
Apple is giving mass surveillance a foothold on our personal phones, living on-device and monitoring our data.
Apple can’t be selling out if they literally have no way of knowing what is in the database of illicit content.
That is why they said that the content has to be in two separate nations' databases. Of course, there is no information that I've seen about which other nation's DB they would use. And without another nation, will the database just sit empty? I doubt it.
Regardless, it's a moot issue, since we already know that the Five Eyes all conspire and would gladly add content for each other, the same as several Middle Eastern nations.
Right now they have no ability to scan every photo on every iOS device for "objectionable" content (as defined by them on that day, based on their mood). But soon they will. All they have to do is add photos to the NCMEC database and an equivalent DB in another country.
> 2) Apple has trillions of dollars to lose by selling out to a shitty change like that.
Apple has trillions to lose by building this system in the first place. All it takes is one court order to make the existing system scan for something other than CSAM.
My family, my wife's family, and much of our extended families are all privacy conscious (we chat over Signal, just like we close our window blinds at night)... and they are all alarmed over this and ready to ditch Apple if they don't change their stance on this.
All it takes is a wiretap warrant and Apple would have to scan on-device pictures and iMessages for whatever the wiretap says. This is true even if the phone has iCloud switched off (most of us already do), since all Apple has to do is change a Boolean variable's value, or something similar requiring no creative effort (and hence legally coercible).
> This is total and utter bullshit. It is a complete misunderstanding of how the system works.
No it's not. It's an extrapolation based on known facts.
Governments can and will coerce companies into changing things about their systems, but the main limitation is that at least in the US, the government can't coerce creative work.
But that's just the thing - flipping a bit here or there, or adding non-CSAM image hashes to the database, is all possible and accessible to wiretap warrants, since doing so is a simple change to a system that already exists.
There's a principle here: trust good security system design, but do not trust system operators, since they can be coerced (legally or otherwise). A secure system that requires trusting particular people who operate it is not designed well.
No it’s not. It’s bullshit based on a complete lack of understanding of the system.
Read any of the public documentation on how it works, and you’d know it’s completely wrong.
There are multiple good reasons to oppose this system. But this:
> All it takes is a wiretap warrant and Apple would have to scan on-device pictures and iMessages for whatever the wiretap says.
Is still complete bullshit. No part of it is true or an extrapolation of known facts. It's not true as a technical possibility, nor even as a legal one.
Quoting from Edward Snowden: "I intentionally wave away the technical and procedural details of Apple’s system here, some of which are quite clever, because they, like our man in the handsome suit, merely distract from the most pressing fact—the fact that, in just a few weeks, Apple plans to erase the boundary dividing which devices work for you, and which devices work for them."
So the details really don't matter as much as the fact that this creates a huge precedent. And, quoting Snowden again, "There is no fundamental technological limit to how far the precedent Apple is establishing can be pushed, meaning the only restraint is Apple’s all-too-flexible company policy, something governments understand all too well." (emphasis mine)
So I might read through that PDF, I might even understand it, but what it can't dissuade me from is the conviction that yes, governments can and will take advantage of this turn of events, and no, there is no technological or legal barrier that can keep them from doing so once this is in place.
>1) The DB must be updated on both the server and the client. Apple's solution requires the DBs to match, essentially. So you can't update the DB without everyone knowing it was changed.
Yes... Because it is impossible for different servers to be configured as backends depending on where handsets are destined to be sold. It's not like it's possible to quickly whip up a geolocation-aware API that can swap things out on the fly. C'mon. This isn't even hard. These are all trivially surmountable problems. The one thing standing in the way of already having done this was that there was no way in hell anyone would have been daft enough to even try doing something like this with a straight face. For heaven's sake, even Hollywood lampshaded it with that "using cell phones as broadband sensors" shtick in The Dark Knight or whatever it was.
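To make the point concrete, the kind of region-dependent backend swap being described is a few lines of code (a purely hypothetical sketch; none of these endpoints or names refer to any real Apple service):

```swift
import Foundation

// Hypothetical per-region configuration: nothing technical forces every market
// to be served the same hash database.
struct RegionConfig {
    let hashDatabaseURL: URL
}

func backend(forRegionCode regionCode: String) -> RegionConfig {
    let overrides: [String: RegionConfig] = [
        // Placeholder region code and URL, for illustration only.
        "XX": RegionConfig(hashDatabaseURL: URL(string: "https://example.invalid/xx-db")!),
    ]
    return overrides[regionCode]
        ?? RegionConfig(hashDatabaseURL: URL(string: "https://example.invalid/global-db")!)
}
```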
>This is a political fight, and if we all act like ideologues we're going to lose it all in the end.
The fight was lost the moment someone caved to "think of the children". Every last warning sign left all over the intellectual landscape was ignored: history, risk, human nature, any semblance of good sense, all ignored.
Honestly, I'm sitting here scratching my head wondering if I took a wrong turn or something 20 years ago. This isn't even close to the place I lived anymore.
> Higgins correctly highlights the immense pressure Apple will get from governments and other actors to bend the technology and use it for something other than CSAM. It will happen; people are probably already thinking about how to apply such pressure. Sooner or later Apple will cave in, and they will have only themselves to blame when freedom supporters in Sudan or LGBTQ activists in Saudi Arabia are jailed.
I'm having trouble seeing how Apple's actions make this any more or less likely. It's not like matching photos is some esoteric concept that no repressive government has ever thought of before. It's not even like it's particularly hard. Apple's implementation is the most privacy-sensitive way of doing it, but if the rules were going to come down they were going to come down, and they'd be implemented in less privacy-sensitive ways.
Apple can also choose to exit that country. There aren't many countries which Apple would even consider risking its global reputation in order to retain that market. US, China, Europe, maybe the UK. That's about it.
If Saudi Arabia or Sudan tried to turn the screws, the business case for Apple is absolutely clear-cut: they leave. This isn't even up for debate. There's far too much at risk globally than there is to gain domestically from compliance.
Not only do they avoid serious damage to their global reputation (something they'll be extremely sensitive to, as the last two weeks have taught them) it would represent a massive opportunity for Apple to earn weeks of free media coverage that aligns with their security narrative.
Since it's speculation at this point, I'd say we deal with it if and when it happens. I also don't see how this concern doesn't apply even more to cloud-side scanning, which other companies already do. Is that out of control? Is there any evidence of that?
> 4. Can Postal Inspectors open mail if they feel it may contain something illegal?
First-Class letters and parcels are protected against search and seizure under the Fourth Amendment to the Constitution, and, as such, cannot be opened without a search warrant. If there is probable cause to believe the contents of a First-Class letter or parcel violate federal law, Postal Inspectors can obtain a search warrant to open the mail piece. Other classes of mail do not contain private correspondence and therefore may be opened without a warrant.
Companies hosting your data aren't similarly restricted (though if the US government wants access then they'd be restricted by the Constitution and legislation again). A company's ability to look at whatever you give them is only restricted by your contract with them and the technical limitations created by how you share it (upload encrypted files where they don't have the key? they can't really do much). They may have some legal restrictions on some kinds of data, but it's not going to be uniform across the globe so you'll have to take care with which companies you choose to host your unencrypted data.
I realise that you're talking about governments globally, but when it comes to the United States, Government pressure cannot compel Apple to expand the on-device searching, because that would be an unequivocal violation of the 4th Amendment of the US Constitution: it would be a search of your private property compelled by the Government.
(After a photo is uploaded to a cloud service, a search of photos stored on servers doesn't enjoy the same 4A protection as this falls under the so-called "third party doctrine".)
(Apple searching for CSAM is also not a 4A violation because it was Apple's free choice as a private company to do so, and you will have agreed to it as part of the Terms of Service of the next version of iOS.)
Western governments had enough room to choose a different way. Instead we are captive to the destructive surveillance ambitions of the war on terror. Do you think those defense and security contractors will ever go away? That they won't conjure up dangers if they have nothing else to do?
Here governments could have set a counterexample, but they neglected these opportunities.
It seems like big tech companies are taking the place of courts in some domains. They draw the lines and enforce the rules. Even worse, it is almost impossible to know where the real lower and upper bounds of those rules lie. There are already some examples (e.g. apps banned for unknown/vague reasons).
- The main issue is that your private key is supposed to be secret, not uploaded to a server you don't control. Of course Protonmail encrypts it, but passphrases are supposed to be an additional layer of security, not the only one. If Protonmail has a data breach, is compelled to surrender your keys, or turns out to be untrustworthy, your messages are only as secure as your password (see the sketch after this list).
- You cannot control when a web app is updated or verify that everyone else got the same update. So Protonmail, or an attacker who took control of their systems, could push you an update that hands them your unencrypted keys. That may be mostly a theoretical issue, because few people verify updates to their local software either. Still, I'd trust the Debian/Ubuntu repositories more.
- Web apps have additional attack surfaces compared to local software. Malicious browser extensions can't access the data of local software, nor is local software susceptible to things like XHR attacks.
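On the first point, what the stored-key scheme reduces to is roughly this (a simplified sketch of the general idea, not ProtonMail's actual code; a real system would use a slow password KDF such as bcrypt or Argon2 rather than a bare hash):

```swift
import CryptoKit
import Foundation

// Derive a key-encryption key from the passphrase. SHA-256 is a stand-in here
// to keep the sketch short; the point is that the passphrase is the only input.
func keyEncryptionKey(fromPassphrase passphrase: String) -> SymmetricKey {
    SymmetricKey(data: SHA256.hash(data: Data(passphrase.utf8)))
}

// Whoever holds the encrypted blob (the provider, an attacker after a breach,
// anyone the provider is compelled to hand it to) is exactly one correct
// passphrase guess away from the plaintext private key.
func decryptStoredPrivateKey(encryptedBlob: Data, passphrase: String) throws -> Data {
    let box = try AES.GCM.SealedBox(combined: encryptedBlob)
    return try AES.GCM.open(box, using: keyEncryptionKey(fromPassphrase: passphrase))
}
```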
They might, but it will be some time before privacy authorities build a case against Zoom and start fining them.
In the meantime, IMHO they are under so much pressure to behave that in the next few months they might really make some progress in this field. I mean, right now Zoom is the most independently "audited" video conferencing app in the world, and many newspapers and state attorneys are investigating [1].
Paid user here, so I am not affected by this problem.
However (writing this complaint here since I am sure they are monitoring this thread), I am really disappointed by the way AgileBits handled this matter.
1) the release notes were shitty and they know it;
2) no one asked for a free app but if you do that, taking it away is a baaaaaad idea;
3) it took me a lot of time to convince family members to use local vaults on their phones; if the feature is removed and they complain to me, I will be extremely unhappy.