
> Apple will publish a Knowledge Base article containing a root hash of the encrypted CSAM hash database included with each version of every Apple operating system that supports the feature. Additionally, users will be able to inspect the root hash of the encrypted database present on their device, and compare it to the expected root hash in the Knowledge Base article.

This is just security theater: they already sign the operating system images where the database resides. And there is no way to audit that the database is what they claim it is, that it doesn't contain multiple databases that can be activated under certain conditions, etc.
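For concreteness: a published root hash only proves that the blob on your device matches the blob Apple signed, not what is inside it. A minimal sketch of what such a check amounts to, assuming a plain SHA-256 binary Merkle tree (Apple has not published the exact construction):

    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves: list[bytes]) -> str:
        """Root hash of a binary Merkle tree over the given leaf blobs."""
        level = [sha256(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:  # duplicate the last node on odd-sized levels
                level.append(level[-1])
            level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0].hex()

    # Hypothetical inputs: opaque encrypted DB entries extracted from the device,
    # and the root hash value copied from Apple's Knowledge Base article.
    device_entries = [b"opaque-encrypted-entry-1", b"opaque-encrypted-entry-2"]
    published_root = "..."  # value from the KB article
    if merkle_root(device_entries) == published_root:
        # Proves the on-device blob is the one Apple published and signed;
        # it says nothing about which hashes that blob actually contains.
        print("database matches the published root")

Passing that check tells you the database is the one Apple signed, which, as noted above, the OS signature already implies.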

> This feature runs exclusively as part of the cloud storage pipeline for images being uploaded to iCloud Photos and cannot act on any other image content on the device

Until a 1-line code change happens that hooks it into UIImage.



> And there is no way to audit that the database is what they claim it is, that it doesn't contain multiple databases that can be activated under certain conditions, etc.

Although this is true, the same argument already applies to "your phone might be scanning all your photos and stealthily uploading them" -- Apple having announced this program doesn't seem to have changed the odds of that.

At some point you have to trust your OS vendor.


Which is why I am confused by a lot of this backlash. Apple already controls the hardware, software, and services. I don't see why it really matters where in that chain the scanning is done when they control the entire system. If Apple can't be trusted with this control today, why did people trust them with this control a week ago?


Because people (HN especially) overestimate how easy it is to develop and deploy "stealth" software that is never detected in a broad way.

The best covert exfiltration is when you can hit individual devices in a crowd, so people have no particular reason to be suspicious. But you're still leaving tracks (connections, packet sizes, etc.) if you actually want to do anything, and you only need to get caught once for the game to be up.

This, on the other hand, is essentially the perfect vehicle for covert surveillance, because the channel itself is overt: everyone is being told to expect it to exist, that it's normal, and that it will receive frequent updates. There's no longer a danger that a security researcher will discover it; it's "meant" to be there.


This is the conclusion that I personally arrived at. When I confronted some friends of mine with this, they gave me some good points to the contrary.

Letting Apple scan your device for material on behalf of governments is a slippery slope to government surveillance, and a hard line should be drawn here. Today, Apple may only be scanning your device for CSAM destined for iCloud. Tomorrow, Apple may implement similar scanning elsewhere, gradually expand the scope to other types of content and to the entire device, implement similar law-enforcement processes on the device, and so on. It's not a good direction for Apple to take, regardless of how it works in this particular case. A user's device is their own personal device, and anything that inches toward government surveillance on that device should be stopped.

Another point made was that government surveillance never happens overnight; it is always gradual. People don't mean to let it happen, and yet it does, because little things like this add up. It's better to stop potential government surveillance in its tracks right now.


Yeah. I will say, though, I am happy that people are having the uncomfortable realization that they have very little control over what their iPhone does.


Now we just need everyone to have that same realization about almost all the software we use on almost all the devices we own. As a practical matter 99.99% of us operate on trust.


Remember the emissions cheating scandal? We supposedly had a system in place to detect bad actors, and yet the cheating was only discovered by the rare case of some curious student exploring how things actually worked, or some such.


Which is why open(-ish[0]) things are good, because people can get curious and see how they work.

[0] I understand the emissions cheating was not supposed to be open, but a relatively open system allowed said student to take a peek and see what was going on.


That’s where I’m at. They could have just started doing this without even saying anything at all.


But in that case they would eventually be caught red-handed and wouldn't get to do the "for the children" spiel and have it swept under the rug like it's about to be.


The goal is not for it to be swept under the rug. The goal is for it to deflect concerns over the coming Private Relay service.


The government cares far more about other things than CSAM, like terrorism, human and drug trafficking, organized crime, and fraud. Unless the CSAM detection system is going to start detecting those other things and report them to authorities, as well, it won't deflect any concerns over encryption or VPNs.


Their Private Relay service appears orthogonal to CSAM… it won’t make criminals and child abusers easier or harder to catch, and it doesn’t affect how people use their iCloud Photos storage.


These people are commonly prosecuted using evidence that includes server logs showing their static IP address.

Read the evidence from past trials and it's obvious. See also the successful and failed attempts to subpoena this info from VPN services.

Only people with iCloud will be using the relay.

It is true that, on the surface, the photo scanning is disconnected from the relay. However, Apple needs a solid answer that handles the bad optics of what you can do with the Tor-like anonymity of iCloud Private Relay.

And if you look more closely, the CSAM service and its implementation are crafted exactly around the introduction of the relay.


Agreed. I think this just shined a spotlight for a lot of people who didn’t really think about how much they had to trust Apple.


They risk a whistleblower if they implement something like this without announcing it, at least while operating in a country with a free press.

It's better to be forthright, or you risk your valuation on the whims of a single employee.


What happens if someone tries to coerce Apple into writing backdoor code? Engineers at Apple could resist, resign, slow roll the design and engineering process. They could leak it and it would get killed. Things would have to get very very bad for that kind of pressure to work.

On the other hand, once Apple has enthusiastically written a backdoor themselves, it's a lot easier to force someone to change how it can be used. The changes are small, compliance can be immediately verified, and refusal punished. To take it to its logical extreme: you cannot really fire or execute people who delay something (especially if you lack the expertise to tell how long it should take). But you can fire or execute people who refuse to flip a switch.

This technology deeply erodes Apple's and its engineers' ability to resist future pressure. And the important bit here is that the adversary isn't all-powerful. It can coerce you to do things in secret, but its power isn't unlimited. See what happened with Yahoo. [0]

https://www.reuters.com/article/us-yahoo-nsa-exclusive/exclu...


If you're uploading to the cloud, you have to trust a lot more than just your OS vendor (well, in the default case, your OS vendor often == your cloud vendor, but the access is a lot greater once the data is on the cloud).

And if your phone has the capability to upload to the cloud, then you have to trust your OS vendor to respect your wish if you disable it, etc.

It's curious that this is the particular breaking point on the slope for people.

The "on device" aspect just makes it more immediate feeling, I guess?


Yes, you had to trust Apple, but the huge difference with this new thing is that hiding behind CSAM gives them far more plausible deniability (legally obligated plausible deniability, in fact, because showing you the images those hashes came from would be illegal) and makes their claims much harder to verify.

In other words, extracting the code and analysing it to determine that it does what you expect is, although not easy, still legal. But the source material, the CSAM itself, is illegal to possess, so you can't do that verification, much less publish the results. It is this effective legal moat around questioning the ultimate targets of this system that people are worried about.


Surely they could do their image matching against all photos in iCloud without telling you in advance, and then you'd be in exactly the same boat? Google was doing this for email as early as 2014, for instance, with the same concerns about its extensibility raised by the ACLU: https://www.theguardian.com/technology/2014/aug/04/google-ch...

So in a world where Apple pushes you to set up iCloud Photos by default, and can do whatever they want there, and other platforms have been doing this sort of thing for years, it's a bit startling that "on device before you upload" vs "on uploaded content" triggers far more discontent?

Maybe it's that Apple announced it at all, vs doing it relatively silently like the others? Apple has always had access to every photo on your device, after all.


It isn't startling: people trust that they can opt out of iCloud Photos.


If you trust that you can opt out of iCloud Photos to avoid server-side scanning, trusting that this on-device scanning only happens as part of the iCloud Photos upload process (with the only way it submits the reports being as metadata attached to the photo-upload, as far as I can tell) seems equivalent.

There's certainly a slippery-slope argument, where some future update might change that scanning behavior. But the system-as-currently-presented seems similarly trustable.


I trust Apple doesn't upload everyone's photos despite opting out because it would be hard to hide.


I bet it'd take a while. The initial sync for someone with a large library is big, but just turning on upload for new pictures is only a few megabytes a day. Depending on how many pictures you take, of course. And if you're caught, an anodyne "a bug in iCloud Photo sync was causing increased data usage" statement and note in the next iOS patch notes would have you covered.

And that's assuming they weren't actively hiding anything by e.g. splitting them up into chunks that could be slipped into legitimate traffic with Apple's servers.


Yeah, it's weird. Speaking purely personally, whether the scanning happens immediately-before-upload on my phone or immediately-after-upload in the cloud doesn't really make a difference to me. But this is clearly not a universal opinion.

The most optimistic take I can see is that this program could be the prelude to needing to trust fewer parties. If Apple can turn on E2E encryption for photos, using this program as a PR shield against law enforcement, that'd leave us having to trust only the OS vendor.


> Speaking purely personally, whether the scanning happens immediately-before-upload on my phone or immediately-after-upload in the cloud doesn't really make a difference to me.

What I find interesting is that so many people find it worse to do it on device, because of the risk that they do it to photos you don't intend to upload. This is clearly where Apple got caught off-guard, because to them, on-device = private.

It seems like the issue is really the mixing of on-device and off. People seem to be fine with on-device data that stays on-device, and relatively fine with the idea that Apple gets your content if you upload it to them. But when they analyze the data on-device, and then upload the results to the cloud, that really gets people.


Is this really surprising to you? I'm not trying to be rude, but this is an enormous distinction. In today's world, smartphones are basically an appendage of your body. They should not work to potentially incriminate their owners.


> They should not work to potentially incriminate their owners.

But that ship has long sailed, right?

Every packet that leaves a device potentially incriminates its owner. Every access point and router is a potential capture point.


When I use a web service, I expect my data to be collected by the service, especially if it is free of charge.

A device I own should not be allowed to collect and scan my data without my permission.


> A device I own should not be allowed to collect and scan my data without my permission.

It's not scanning; it's creating a cryptographic safety voucher for each photo you upload to iCloud Photos. And unless you reach a threshold of 30 CSAM images, Apple knows nothing about any of your photos.
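For what it's worth, the threshold isn't just a policy promise: as I understand the technical summary, the voucher contents are protected with a threshold secret-sharing scheme, so the server can only reconstruct the decryption key once it holds enough matching shares. A toy illustration of the underlying idea using Shamir's scheme over a prime field (not Apple's actual protocol, just the concept):

    import random

    PRIME = 2**127 - 1  # a Mersenne prime; the field for this toy example

    def make_shares(secret: int, threshold: int, count: int):
        """Split `secret` into `count` shares; any `threshold` of them recover it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, count + 1)]

    def recover(shares):
        """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    key = 123456789  # stands in for the key protecting voucher contents
    shares = make_shares(key, threshold=30, count=100)
    assert recover(shares[:30]) == key   # 30 matches: the server can decrypt
    assert recover(shares[:29]) != key   # 29 matches: reconstruction fails
                                         # (except with negligible probability)

Whether you trust the chosen parameters and the system wrapped around them is, of course, the actual debate.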


From the point of view of how image processing works, what is happening can indeed be called “scanning”.
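Concretely: producing any perceptual fingerprint means decoding the image and running its pixels through some function, which is hard to call anything other than scanning. A rough illustration using a classic average hash, far simpler than NeuralHash but the same shape of operation (assumes Pillow is installed; the match threshold below is made up):

    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        """Classic aHash: shrink, grayscale, threshold each pixel against the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p >= mean else 0)
        return bits  # a 64-bit perceptual fingerprint

    def hamming(a: int, b: int) -> int:
        """Number of differing bits; a small distance means visually similar images."""
        return bin(a ^ b).count("1")

    # Matching then amounts to comparing fingerprints against a database, e.g.:
    # if hamming(average_hash("photo.jpg"), known_hash) <= 5: flag the photo.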


This seems like a necessary discussion to have in preparation for widespread, default end to end encryption.


Them adding encrypted hashes to photos you don't intend to upload would be pointless and not much of a threat, given the photos themselves are already there. They don't do it, and it doesn't feel like a huge risk.


No, the threat model differs entirely. Local scanning introduces a whole host of single points of failure, including the 'independent auditor' & involuntary scans, that risk the privacy & security of all local files on a device. Cloud scanning largely precludes these potential vulnerabilities.


Your phone threat model should already include "the OS author has full access to do whatever they want to whatever data is on my phone, and can change what they do any time they push out an update."

I don't think anyone's necessarily being too upset or paranoid about THIS, but maybe everyone should also be a little less trusting of every closed OS - macOS, Windows, Android as provided by Google - that has root access too.


Sure, but that doesn't change the fact that the vulnerabilities with local scanning remain a significant superset of cloud scanning's.

Apple has built iOS off user trust & goodwill, unlike most other OSes.


Cloud Scanning vulnerability: no transparency over data use. On the phone, you can always confirm the contents of what’s added to the safety voucher’s associated data. On the cloud, anything about your photos is fair game.

Where does that fit in your set intersection?


> On the phone, you can always confirm the contents of what’s added to the safety voucher’s associated data.

...except you can't? Not sure where these assumptions come from.


It’s code running your device is the point, so while “you” doesn’t include everyone, it does include people who will verify this to a greater extent than if done on cloud.


It differs, but iOS already scans images locally, and we really don't know what they do with the metadata or what "hidden" categories there are.


Yes, exactly why Apple breaching user trust matters.


And how is telling you in great detail about what they’re planning to do months before they do it and giving you a way to opt out in advance a breach of trust? What more did you expect from them?


> What more did you expect from them?

Well, they could simply not do it.


You might prefer that, but it doesn’t violate your privacy for them to prefer a different strategy.


Why even ask the question "What more did you expect from them?" if you didn't care about the answer?

I gave a pretty obvious and clear answer to that, and apparently you didn't care about the question in the first place, and have now misdirected to something else.

I am also not sure what possible definition of "privacy" you could be using that would not include things such as on-device photo scanning for the purpose of reporting people to the police.

Like, let's say it wasn't Apple doing this. Let's say it was the government. As in, the government required every computer that you own to be monitored for certain photos, at which point the info would be sent to them and they would arrest you.

Without a warrant.

Surely you'd agree that this violates people's privacy? The only difference in this case is that the government now gets to sidestep Fourth Amendment protections by having a company do it instead.


My question was directed at someone who claimed their privacy was violated, and I asked them to explain how they would’ve liked their service provider to handle a difference in opinion about what to build in the future. I don’t think your comment clarifies that.


> how they would’ve liked their service provider to handle a difference in opinion about what to build in the future

And the answer is that they shouldn't implement things that violate people's privacy, such as things that would be illegal for the government to do without a warrant.

That is the answer. If it is something that the government would need a warrant for, then they shouldn't do it, and doing it would violate people's privacy.


You forgot "after it leaked".


It’s almost certain the “leak” was from someone they had pre-briefed prior to launch. You don’t put together 80+ pages of technical documentation with testimony from multiple experts in 16 hours.


'Almost certain'? Have you heard of contingency planning?


What’s the difference between hybrid cloud/local scanning “due to a bug” checking all your files and uploading too many safety vouchers and cloud scanning “due to a bug” uploading all your files and checking them there?


...because cloud uploads require explicit user consent, practically speaking? Apple's system requires none.


Wouldn't both of those scenarios imply that the "bug" is bypassing any normal user consent? They're only practically different in that the "upload them all for cloud-scanning" one would take longer and use more bandwidth, but I suspect very few people would notice.


I think the difference lies in the visibility of each system in typical use. Apple's local scanning remains invisible to the user, in contrast to cloud uploading.


[flagged]


Ditto, too bad you got flagged earlier


What about trust-but-verify?

If the OS were open source and supported reproducible builds, you would not have to trust them: you could verify what it actually does and make sure the signed binaries they ship actually correspond to the source code.

One kinda wonders what they want to hide if they talk so much about user privacy yet don't provide any means for users to verify their claims.
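To sketch what "verify" could look like with reproducible builds (entirely hypothetical file paths, and assuming the vendor's build were actually reproducible): rebuild the OS image from the audited source, then compare digests with what ships to the device.

    import hashlib

    def digest(path: str) -> str:
        """SHA-256 of a file, streamed so large OS images are fine."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical artifacts: an image rebuilt from source vs. the shipped one.
    local = digest("build/os-rebuilt.img")
    shipped = digest("device/os-shipped.img")
    print("reproducible match" if local == shipped else "binaries differ: investigate")

Of course, nothing like this is on offer today, which is exactly the complaint.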


Yes, they can technically already do so, but that is not the question. The question is what can they legally do and justify with high confidence in the event of a legal challenge.

Changes to binding contractual terms that allow broad readings and provide legal justification for future overreach are dangerous. If they really are serious about using these new features in a highly limited way, then they can put their money where their mouth is and add legally binding contractual terms that limit what they can do, with serious consequences if they are found to be in breach. Non-binding marketing assurances that they will not abuse their contractually justified powers are no substitute for the iron fist of a legal penalty clause.


Yeah, that's true, although to do some sort of mass scanning stealthily they would need a system exactly like the one they've built here; if they tried to upload everything for scanning, the data use would be enormous and give it away.

I guess it comes down to this: I don't trust an OS vendor that ships an AI-based snitch program that they promise will stay dormant.


Speaking cynically, I think that them having announced this program like they did makes it less likely that they have any sort of nefarious plans for it. There's a lot of attention being paid to it now, and it's on everyone's radar going forwards. If they actually wanted to be sneaky, we wouldn't have known about this for ages.


They'd have to be transparent about it, as someone would easily figure it out. You have no way of verifying the contents of that hash database. Once the infrastructure is in place (i.e. on your phone), it's a lot easier to expand on it. People have short memories and are easily desensitized; after a year or two of this, everyone will forget, and then we'll be in an uproar about it expanding to include this or that...


You're making the mistake of anthropomorphizing a corporation. Past a certain size, corporations start behaving less like people and more like computers, or maybe profit-maximizing sociopaths. The intent doesn't matter, because 5 or 10 years down the line, it'll likely be a totally different set of people making the decision. If you want to predict a corporation's behavior, you need to look at the constants (or at least, slower-changing things), like incentives, legal/technical limitations, and internal culture/structure of decision-making (e.g. How much agency do individual humans have?).


I feel that I was stating the incentives, though.

This being an area people are paying attention to makes it less likely they'll do unpopular things involving it, from a pure "we like good PR and profits" standpoint. They might sneak these things in elsewhere, but this specific on-device-scanning program has been shown to be a risk even at its current anodyne level.


No, they wouldn't need a system like this. They already escrow all your iCloud backups, and doing the scanning server-side lets them avoid any scrutiny through code or network monitoring.


> At some point you have to trust your OS vendor.

Yes, and we were trusting Apple. And now this trust is going away.


> now this trust is going away

Is it really? There are some very loud voices making their discontent felt. But what does the Venn diagram look like between 'people who are loudly condemning Apple for this' and 'people who were vehemently anti-Apple to begin with'?

My trust was shaken a bit, but the more I hear about the technology they've implemented, the more comfortable I am with it. And frankly, I'm far more worried about gov't policy than I am about the technical details. We can't fix policy with tech.


> I'm far more worried about gov't policy than I am about the technical details. We can't fix policy with tech.

Yeah. I don't really understand the tech utopia feeling that Apple could simply turn on e2ee and ignore any future legislation to ban e2ee. The policy winds are clearly blowing towards limiting encryption in some fashion. Maybe this whole event will get people to pay more attention to policy...maybe.


> Until a 1-line code change happens that hooks it into UIImage.

I really don't understand this view. You are using proprietary software; you are always an N-line change away from someone doing something you don't like. This announcement doesn't change that.

If you only use open source software and advocate for others to do the same, I would understand it more.


Did you verify all the binaries that you run are from compiled source code that you audited? Your BIOS? What about your CPU and GPU firmware?

There is always a chain of trust that you end up depending on. OSS is not a panacea here.


It's not a panacea, but the more implausible the mechanism, the less likely it is to be used on anyone but the highest-value targets.

(And besides, it's far more likely that this nefarious Government agency will just conceal a camera in your room to capture your fingers entering your passwords.)



> I really don't understand this view. You are using proprietary software, you are always an N-line change away from someone doing something you don't like. This situation doesn't change this.

And I don't understand why it has to be black and white. I think the N is very important in this formula, and if it is low, that is cause for concern. It's like an enemy building a missile silo on an island just off your coast while promising it's just for defense.

All the arguments I see are along the lines of "Apple can technically do anything they want anyway, so this doesn't matter." But maybe you're right, and moving to FOSS is the only long-term solution; that's what I'm doing if Apple goes through with this.


I’d leave this one to the lawyers. I’m not one, but I don’t think a court will evaluate the number of lines of code it took to help.


The size of N doesn't really matter. I'm sure Apple ships large PRs in every release, as any software company does.


Maybe not if you assume Apple is evil, but in the case of Apple being well-intentioned yet having its hand forced, they will have a much harder time resisting a 1-line change than a mandate to spend years developing a surveillance system.


Apple shipped iCloud Private Relay, which by this standard is a “1-line code change that hooks into CFNetwork” away from MITMing all your network connections.


For me the standard is that I don't want any 1-line code change between me and near-perfect Orwellian surveillance.


Since your one-liners seem to be immensely dense with functional changes, I can’t understand how you trust any software.


Any connection worth its salt should be TLS protected.


Also in CFNetwork. Probably a one-line change to replace all session keys with an Apple-generated symmetric key.


> And there is no way to audit that the database is what they claim it is, that it doesn't contain multiple databases that can be activated under certain conditions, etc.

They describe a process for third parties to audit that the database was produced correctly.


Do we have any idea how the NCMEC database is curated? Are there cartoons from Hustler depicting underage girls in distress? Greentext stories about illegal sexual acts that claim to be true? CGI images of pre-pubescent-looking mythical creatures? Manga/anime images which are sold on the Apple Store? Legitimate artistic images from books currently on sale? Images of Winnie the Pooh that a government has declared pornographic? From the amount of material the Feds claim is being generated every year, I would have to guess all of this is included. The multi-government clause is completely pointless given Five Eyes cooperation.

The story here is that there is a black box of pictures. Apple will then use their own black box of undeclared rules to pass things along to the feds; they have not shared what would be considered offending in any way, shape, or form, other than "we will know it when we see it". Part of the issue here is that Apple is taking on the role of a moral authority. Traditionally Apple has been incredibly anti-pornography, and I suspect that anything that manages to get into the database is something Apple will just pass along.


Apple is manually reviewing every case to ensure it’s CSAM. You do have to trust them on that.

But if your problem is with NCMEC, you’ve got a problem with Facebook and Google who are already doing this too. And you can’t go to jail for possessing adult pornography. So even if you assume adult porn images are in the database, and Apple’s reviewers decide to forward them to NCMEC, you would still not be able to be prosecuted, at least in the US. Ditto for pictures of Winnie the Pooh. But for the rest of what you describe, simulated child pornography is already legally dicey as far as I know, so you can’t really blame Apple or NCMEC for that.


Facebook I completely approve of: you are trafficking data at that point if you are posting it. I just recall the days of Usenet and Napster, when I would download at random and sometimes evil people would mislabel things to cause trauma. I don't download things at random any more, but when I was that age it would have been far more appropriate to notify my parents than to notify the government.

In any case it is likely the government would try to negotiate a plea to get you into some predator database to help fill the law enforcement coffers even if they have no lawful case to take it to court once they have your name in their hands.


> Ditto for pictures of Winnie the Pooh.

References to Winnie the Pooh in these discussions are about China, where images of Winnie are deemed to be coded political messages and are censored.

The concern is that Apple is building a system that is ostensibly about CSAM, and that some countries such as China will then leverage their power to force Apple to include whatever political imagery they like in the database as well, giving the government there the ability to home in on who is passing around those kinds of images in quantity.

If that seems a long way indeed from CSAM, consider something more likely to fit under that heading by local government standards. There's a country today, one you may have heard of, one the USA is busy evacuating its personnel from to leave the population to an awful fate, where "female teenagers in a secret school not wearing a burqa" may be deemed by the new authorities to be sexually titillating, inappropriate, and illegal; and if they find out who is sharing those images, the punishments are much worse than mere prison. Sadly there are a plethora of countries that are very controlling of females of all ages.


Drawings are prosecutable in many countries including Canada, the UK, and Australia. Also, iCloud sync is enabled by default when you set up your device, whereas the Facebook app at least is sandboxed and you have to choose to upload your photos.


> You do have to trust them on that.

If this system didn't exist, nobody would have to trust Apple.

> you would still not be able to be prosecuted

But I wouldn't want to deal with a frivolous lawsuit, or have a record on social media of having been brought up on CSA charges.




