As always with patches to something at this level (if it is indeed Pegasus-related, say), it's important to note that even if this was a rarefied, targeted-use exploit before, it won't stay that way for long. Now that Apple has released a patch for it, widespread reverse engineering will begin immediately, and it'll only be a matter of time until packaged exploits become part of standard mass-use toolkits. Having a patch ready to deploy is great, but it simultaneously means it's all the more important to get it deployed fairly promptly if it's something that could have serious root/remote-execution potential.
Though I suppose if this bug can be used for a jailbreak there may be some people who'd actively want to stay on 14.7 as well. It's too bad on iOS Apple forces people to choose between security and control of their own systems and doesn't at least allow a purchase-time option to have the ability to load one's own root signing certificate.
> Though I suppose if this bug can be used for a jailbreak there may be some people who'd actively want to stay on 14.7 as well. It's too bad on iOS Apple forces people to choose between security and control of their own systems and doesn't at least allow a purchase-time option to have the ability to load one's own root signing certificate.
That's exactly why the jailbreak scene has no incentive to share any exploit with Apple and obfuscates everything. That's not great for security, but it's a direct consequence of Apple's policies.
Correct. The issue is that Apple takes security systems designed as a defense against local attackers and uses them as a bulwark against their own customers.
Come to think of it, maybe that could be the basis of a right-to-repair law: companies which sell hardware products that restrict functionality or access to those who know some secret value must divulge those secrets to the device owner upon request at the time of purchase.
The only issue I can think of with a law like that is that it'd make DRM significantly less effective, though IMO that's not necessarily a bad thing.
That's the hard part for all of this: if you are building an unlock mechanism the first question you need to ask is how you build a UI which clearly communicates to a user that they are likely giving control of all of their data, location/camera/microphone, etc. to whatever they're installing.
The scammers who push malware under the guise of tech support, free porn/games, etc. are effective enough to compromise millions of people and that's before you get to the question of what it'd look like if a government started pushing access for monitoring. How many people might consider installing something which this guy they met in a coffeeshop says will protect their messages from government surveillance? Now consider how many people might have malware installed by an abusive domestic partner, and where control of the device would extend to hiding the existence of spyware.
This is not to say that Apple is acting without self-interest here, only that I think there's really a pretty nasty market failure making it quite difficult to reconcile someone being able to make choices about their device with a fairly high risk of compromise with potentially significant consequences.
> if you are building an unlock mechanism the first question you need to ask is how you build a UI which clearly communicates to a user that they are likely giving control of all of their data, location/camera/microphone, etc. to whatever they're installing.
Facebook puts a large "Stop!" warning in the browser console when you try to open developer tools with a link to https://www.facebook.com/selfxss. I'd be really curious to see if that's helped to stop such attacks on their side.
Full text: "This is a browser feature intended for developers. If someone told you to copy-paste something here to enable a Facebook feature or "hack" someone's account, it is a scam and will give them access to your Facebook account."
Facebook themselves convinced teenagers to side-load an iOS app that used Facebook’s enterprise cert to circumvent Apple’s privacy protections. They got caught and Apple revoked the cert, which broke all their internal iOS apps.
If Facebook is willing to fool kids into giving up their privacy via side-loading, you can imagine why Apple is nervous about how things would go if anyone could easily do so.
> That's the hard part for all of this: if you are building an unlock mechanism the first question you need to ask is how you build a UI which clearly communicates to a user that they are likely giving control of all of their data, location/camera/microphone, etc. to whatever they're installing.
Nah, that's a win2k-era cop-out. The hard part is balancing secure-by-default with granting privileges only when absolutely necessary (and authenticated).
It’s recognizing that giving up control of the trusted base means giving up control of the device. If that option is available, people will be tricked or coerced into using it, or it will be done without their knowledge – as evidenced by all of that being done not uncommonly on desktops.
I think a better angle is pushing for relaxing software distribution: keep the trust chain but let people run apps in the normal sandbox even if those haven’t gone through the current process. Exploits are still possible, of course, but the normal controls allowing user awareness of the device’s status would be intact.
The solution is to not make people have to root their device for that in the first place. Allow people to install apps within the sandbox with minimal static scanning and perhaps a DB of hashes for security notifications later, and let that be it. You'll crush the market for easily accessible jailbreak tools too.
> Allow people to install apps within the sandbox with minimal static scanning and perhaps a DB of hashes for security notifications later, and let that be it.
We know exactly what happens when you do this. Less savvy users will be tricked into installing malware.
For instance, a couple of months ago:
> FlixOnline App Spreads Android Malware by Promising Free Netflix
I think that’s the right direction, too. It doesn’t help the people who have an ideological position that they should be able to do anything with the device but it seems more practical.
> ideological position that they should be able to do anything with the device
While there are people who make this argument purely on ideological grounds (similar to arguments you hear about individual freedoms vs. collective rights), it's essential to recognize that _completely_ removing the ability of independent developers to write and run software whenever the manufacturer decides it doesn't like that developer will slowly destroy the competition, creativity and freedom that created most of the technologies that are used today.
It's reasonable to make the argument that the manufacturer needs to secure the devices that they sell, even for users with low technical literacy.
Advocating for the manufacturers to be given total control over everything that every user can do with their device won't guarantee increased security, but it certainly would result in manufacturers being able to disable any software they chose to, regardless of legitimacy, without reason or recourse.
That's a prospect that is (IMO) far more terrifying than what it could prevent (some users falling for certain types of phishing attacks that install spyware).
> That's a prospect that is (IMO) far more terrifying than what it could prevent (some users falling for certain types of phishing attacks that install spyware).
I disagree with that conclusion (the latter scenario happens frequently and has led to significant consequences, including death) but completely agree that it's not a good situation that the alternative is giving a couple of companies control over who gets to ship software. That's why I described it as a market failure — as a user you're left picking which set of drawbacks is less of a problem for you.
What I'd like is basically opening up the App Store walls: allow users to enable third-party stores but everyone runs their apps inside the same sandbox, and the OS vendor retains some global kill switch for malware but with some level of public oversight.
One edge case for this would be the apps which need special permissions: for example, some cell carriers have special entitlements on iOS which allow their apps to talk to their networks in ways which normally are blocked. Reconciling edge cases like that with multiple stores would require care.
I'm not suggesting that the manufacturer should be creating closed platforms to "secure" things for the user; I'm saying that closed platforms can't guarantee an increase in security but will guarantee the slow erosion of openness in all other platforms.
I'm well aware of how native apps are heavily marketed over their web-app equivalents because companies want to gather more data, get access to push notifications (on iOS at least), etc.
Probably true, but the solution to that must not be to treat the user like a child unable to make decisions for themself.
If a person decides to give up control of their device to another person or organization (like the device manufacturer) fine, but that should be a voluntary decision on the part of the user, not something forced.
> the solution to that must not be to treat the user like a child unable to make decisions for themself
There is a wide gap between being unable to decide for oneself in the general case, like a child, and being unable to decide for oneself due to lack of expert knowledge. Sometimes, less is more. Apple built its brand on this insight.
The old Internet Explorer full of toolbars, and the notification area with an endless number of background services, kind of prove the point that the general public are children unable to make decisions for themselves.
These toolbars were usually bundled with legit software (Heck, even Java installed Bing toolbar..) behind dark patterns where a computer beginner would, understandably, not realize they were also installing something else.
As long as the dialog for unlocking your phone clearly stated the risks and what it does (unlike those toolbars upon installation), I think it would be totally OK.
Imagine if you couldn't install Linux on any PC because Dell didn't give you the keys, to prevent you from hurting yourself. Or if Apple didn't allow sudo in macOS. Seems kinda absurd, right? Yet this is the standard for phones.
Also, I'd like to point out that Android allows installation of external APKs (after enabling the option), and it didn't lead to a huge increase in malware, as users still prefer the Play Store, and APKs are usually left to the more technically inclined users for niche purposes or, oftentimes, circumventing region locks.
> and it didn't lead to a huge increase in malware
Are you sure about that? I pulled an old, cheap Android phone out of a storage box to do some debugging. I needed a quick file manager app, and one of the top results I installed from Google Play came baked with malware that may have bricked the device (I didn't bother trying to fix it; it's stuck in the bootloader). Granted, this was an old version of Android, but that's still a bit egregious. I can't recall anything like that happening on iOS.
-> "Upon installation (from a third party store, not Google Play Store), the device gets registered with the Firebase Command and Control (C&C) with details such as the presence or absence of WhatsApp, battery percentage, storage stats, the token received from the Firebase messaging service, and the type of internet connection. "
It isn't hard to find similar reports; there are enough of them.
Wanna know the sad thing? When I was…12 or so…I thought all those cool icons were awesome, so I _only_ installed programs that had them…and those little “desktop buddies”? I had maybe five of them. Innocent days gone by.
What if that person's “decision” was being tricked by a scammer? Or being in an abusive relationship? Or living in a surveillance state?
I want people to have more control over devices but there are some non-trivial threats which millions of people face which are currently defanged by things like the OS not allowing malware installs, or hiding the use of various sensors or data access, etc.
So people might make bad decisions, therefore they should not be allowed to make decisions at all?
Sure, do everything you can to educate the user on what they're doing and make things as hard as possible for scammers, but ultimately it must be their choice who controls their device, not yours.
We don't have absolute freedom in many areas because the law recognizes either that the general public is not capable of making informed decisions when that requires high levels of skill, or that there is a high likelihood of bad outcomes due to malicious actors.
For example, if you buy a car there are safety features you cannot turn off without major modifications. Dangerous tools often have safety guards built-in to the design — for example, my blender doesn't say it's my choice whether to keep the spinning blade covered. My bank doesn't say it's my choice whether or not to identify myself when making a withdrawal.
For most people, iOS limiting the damage when they get compromised is a huge plus. Removing that entirely, as the absolutist position demands, takes people back a couple of decades, only now the malware will have access to their most private moments and data 24x7 rather than just their GeoCities browsing history.
Now, consider what that means for something like unlocking the bootloader. Allowing that means undetectable, potentially irrecoverable malware. There's not really a good way around that: even if you try to inform people at the time they enable it, that doesn't help if it's done when the device is out of their control (abusive partner, boss, police, etc.), and every on-screen indicator can be faked by the malware vendor. That leaves options like a physical switch in a prominent location, which most people aren't going to want to pay for or see, and which is more likely to be used by accident than intentionally.
Really? How have you defined common? Where is that data from? I can't imagine anything more than a tiny fraction of car owners are doing this. Most people are not hacking their cars, building Raspberry Pi controllers for their toilets, or have any desire whatsoever to side load an app on their iPhone. The HN community is not representative of the overall population.
It's common enough that there's a legitimate industry around that and a lot of auto-tuning shops will do it on request. In most cases though it's just editing the built-in data tables rather than completely replacing the firmware, and it's often done for engine performance reasons rather than bypassing safety features.
Mechanics do it often, as do people who work on their cars. It isn't the Hacker News segment that is doing it, but the people who work on cars for a living or for fun.
And you’re free to jailbreak your phone. It’s not illegal. No one’s going to arrest you.
But no one says the car manufacturers are required to make a car where you can flash the ECU. There’s just no reason for them to block it. If someone came up with a way to steal your car by tricking you into running an app that flashed your ECU, then you can bet Ford would disable it pretty quickly.
> But no one says the car manufacturers are required to make a car where you can flash the ECU. There’s just no reason for them to block it.
Sure there is… if they are providing a warranty on the vehicle then they should definitely block ECU mods, as a customer could flash the original firmware back after causing damage.
And I imagine you can easily make an engine over-rev or overheat with a modded ECU, perhaps to the point of causing a fire or other catastrophe.
> Dangerous tools often have safety guards built-in to the design
A phone is not, usually, a dangerous tool. Unless you can make it explode :)
> For most people, iOS limiting the damage when they get compromised is a huge plus.
Then just don't unlock. The unlock would need to be done from inside the OS, by the user, not by an external actor.
> Now, consider what that means for something like unlocking the bootloader. Allowing that means undetectable, potentially irrecoverable malware. There's not really a good way around that: even if you try to inform people at the time they enable it, that doesn't help if it's done when the device is out of their control (abusive partner, boss, police, etc.)
If you have an abusive partner, boss, etc. you're already at risk because they will keep looking at your phone anyway and you're in no position to refuse. A way better solution is to have a burner phone they don't know about.
Agreed on the police point, but I think after getting your phone back from the police you could just clean-install iOS again to make sure nothing unwanted is there.
P.S. On the second-hand market, you could just as easily reinstall stock iOS once you buy the phone, to avoid ending up with a second-hand unlocked iPhone that carries malware.
What if _you_ didn’t? I outlined scenarios which are real risks for millions of people and a fair fraction of those either involve no or limited user consent. Maybe it’d be possible to have a secure factory reset – this is harder than it might seem with firmware in so many places – but that only helps if you even know that you need to. Anything which allows undetectable rootkits means that a large number of people will never even realize that because phones are general purpose devices and most people are not security experts. Worse, how many of them are actively being betrayed by trusted experts - abusive partners, unethical technicians, etc. - who will say it’s fine?
There probably hasn’t ever been a time with more options to do just that than today.
The reality is it isn’t 1981 and computers aren’t toys or cloistered academic devices anymore. Mass market platforms need to be more secure than 90s era tech allowed.
Next thing, Facebook lures users into side-loading their app. The app won’t provide any warning about the complete mirroring of any user activity, just “stay in contact with your loved ones” and “ooh kittens”. Everyone will install it because of FOMO and network effect; and there goes your privacy.
Eventually Google, Amazon, and everyone else under the sun will do the same, just in case some aspect of your privacy is not yet captured.
The only way to stop that sort of corporate behavior by Facebook, etc. is to regulate how data about users can be collected, and to have real (potentially criminal) penalties for data collection without informed consent.
Technical solutions (like preventing side-loading) cannot possibly eliminate the thing driving the behavior: the profit motive for stealthily gathering data to build "better" user profiles to lease to advertisers.
Well, we’re getting there: the EU's GDPR regulation. But given the inter-jurisdictional nature of the internet, I’m happy to use a device that makes it hard to siphon my personal data and dump it beyond reach.
Making it hard to siphon personal data doesn't require manufacturers to prevent side-loading of apps.
A much better solution would be to create better permissions models (capability-based) for mobile devices.
As an example, one could build a granular permissions model which forced apps to be composed of multiple modules. Each module would have its own set of mutually exclusive permissions.
An app that allowed the user to apply filters to photos/videos could be composed of two modules (see the sketch at the end of this comment):
- One with read and create permissions for files in the photo library, along with read-only access to an app-local storage directory
- One with network communication permissions (exclusively via OS APIs that performed all the crypto[1]) and write-only access to the app-local storage directory (for downloading new filters)
By forcing apps to be broken up into separate modules and ensuring that all network communication is done from a carefully constrained sandbox (which can only read/transmit a tiny subset of data generated/available to the app), users could see exactly what the app transmitted without having to install their own self-signed root certificate (and maybe also jailbreak their devices if the app that they're interested in is using certificate pinning or ignoring the OS certificates).
The best way to technically combat a lot of this sort of data-slurping nonsense would be to force apps to have near[1] total transparency for all network communication for any user that wanted to see.
[1] The OS crypto API would have to provide a few authentication routines which blanked passwords, secret keys, etc. after use to prevent the unintentional leaking of secrets. Those APIs would obviously have to be designed to be resistant to malicious use by Facebook, etc.
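To make that concrete, here is a hypothetical sketch of what such a manifest could look like. To be clear, none of these types exist in iOS or Android; `Permission`, `Module`, and `AppManifest` are invented purely for illustration:

```swift
// Hypothetical sketch only: illustrates the proposed model where an OS
// forces apps into modules with mutually exclusive permission sets, so the
// module that can read photos can never be the module that talks to the net.
enum Permission: Hashable {
    case photoLibraryRead
    case photoLibraryCreate
    case localStorageRead
    case localStorageWrite
    case network // only via OS APIs that perform all the crypto (see [1])
}

struct Module {
    let name: String
    let permissions: Set<Permission>
}

struct AppManifest {
    let modules: [Module]

    // The OS would reject any manifest where a single module combines
    // broad data access with the ability to transmit off-device.
    var isValid: Bool {
        modules.allSatisfy { module in
            !(module.permissions.contains(.network)
              && module.permissions.contains(.photoLibraryRead))
        }
    }
}

// The photo-filter app from the example above, split into two modules:
let filterApp = AppManifest(modules: [
    Module(name: "Editor",
           permissions: [.photoLibraryRead, .photoLibraryCreate, .localStorageRead]),
    Module(name: "FilterDownloader",
           permissions: [.network, .localStorageWrite]),
])
print(filterApp.isValid) // true: the networked module can never read photos
```

The key property is that validation happens at the manifest level, so "can read photos" plus "can reach the network" is simply unrepresentable for any single module.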
Doesn't track at all. If a person wants a device with that sort of freedom they can buy one or fund one. The only reason there aren't devices famous for empowering users is because users don't want to have to deal with that. It's only techbros and people they scare with racist China imagery that care. This 'Must Not' mentality is just entitlement.
The security model is wrong. In order to get access to my own data, I have to prove I am who I say I am to someone who doesn't know me from a bar of soap. It makes no sense to blame the user for that.
> Most security breaches these days are social-engineered.
I don't know why people even bother making statements like that here without backing them up, when it's pretty obvious the first response is going to be to request a source.
Thank you for the link. FWIW, as I understand it, that indicates that social is neither the majority, nor the most common form of breach, but it is increasing over time.
That's the problem with using first-hand experience. Unless you're doing a statistical review and your experience is actually collating data, the chance that your experience accurately reflects real-world data is wildly dependent on the topic, and with one with as many variables as this, I would expect the chance of a statement like that being correct to be more akin to luck than to correctly inferred results from a relevant subset of data. I.e., what industry you work in, what your actual job is, and what you already know about it are likely to wildly influence what you see. Even someone working in the security industry is likely to see only a subset related to their job/interests.
You should probably go re-examine the summary of findings chart. It's number one as a cause, although not strictly speaking a majority. Also, more to the point, ~85% of breaches were human-involved.
This is a very widely discussed trend in the industry and has been for literally almost a decade. It has led to the tired trope of the number one hardening recommendation being "get rid of the users" as well as the much more helpful "users can't be expected to participate in their own security, it needs to be done for them transparently".
> You should probably go re-examine the summary of findings chart.
Those are not the same figures as in the PDF listed. The PDF presented seems to be from 2019, and I can't tell because your images are presented out of context, but it looks like they might be from a newer report?
The PDF linked has a summary of findings that shows in figure 4 that 52% of breaches included hacking, and that 33% included social attacks.
Assuming I've found the document you're using as a source[1] (it's nice to have a newer version, but I hardly think it's fitting to ask me to reexamine a summary of findings chart that I was not originally presented with), it does go into more recent information.
That said, figures 20[2] and 21 in the new report do shed some more light on what we're actually talking about, and brings up the difference between "incidents" and "breaches" in this report, and what they mean. They define an incident as something that compromised the integrity, confidentiality or availability of an asset, while a breach is something that results in a confirmed disclosure of information. While phishing (social) is top in breaches as a bit less than 40% and pretexting (social) also makes a good showing in the breach chart (figure 20[2]), the much more expansive category of incidents (figure 21[3]) is overwhelmingly dominated by DoS (hacking) at almost 60%.
While that may seem like splitting hairs, especially since the original comment mentioned breaches, it was in response to and in the context of iPhone users, which is end-user security. These reports are about businesses, and note they're following the NAICS standard to categorize victim organizations. They source their data from paid external forensic investigations, direct recording by partners using VERIS, and converting partners' existing data schemas into VERIS (paraphrasing Appendix A). I think this discussion is about people and security in general, and I don't think this data is about that; I think it's about companies and organizations.
Now, to be clear, on thinking about this more closely I fully believe that social engineering attacks could well be the majority of attacks once end users are taken into account, and I imagine quite a lot of end-user malware and ransomware is installed from email, but I'm not sure, and I'm not sure how much browser exploits account for that, or how much phones change the picture. If we were just considering home systems, the majority of which I assume are firewalled at this point, I would definitely assume social engineering, but with phones out hopping public ranges through data plans and sharing wifi at many businesses, I'm not sure how that may change things.
In any case, I think it's completely valid to call out someone that makes a general factual statement such as "Most security breaches these days are social-engineered." without providing that evidence. It's not self evident, and there's a lot of data to look at, as we've seen.
> The only issue I can think of with a law like that is that it'd make DRM significantly less effective, though IMO that's not necessarily a bad thing.
I wonder why movie companies these days still insist on shipping DRM garbage. HDCP is broken on a fundamental level, you can grab strippers for about 90€ on eBay, and BD+ is similarly broken.
It literally has no purpose other than annoying customers who have rooted their Android phones (Netflix refuses to go above 480p) or want legitimate backups of their bought content.
Jailbreaks require 0 days. What’s amazing is that they keep coming out despite the prices being so high and the game already having been played for so long.
Most exploits used in a jailbreak are developed only after a public patch for the underlying bugs appears.
Once the patch is publicly available, an exploit has much less commercial value. (Attackers don’t like having to call their target and tell them not to apply the patch.)
It therefore makes sense to donate such exploits for a jailbreak.
I can’t fully grasp the argument about jail breaking. The device was never intended to be used this way in the first place. The process itself probably also relies on bugs that need to be reported, not harnessed.
I own my device without the need to root it, every feature in the brochure is available to me as a user.
I understand that around the 90s owning anything with an OS also implied the ability to mess with the box of bolts and circuits that run the OS but that’s no longer the case. At least not when it comes to phones.
I mean sure maybe you want a fancy device to play with … get a PCB and do whatever you want with it while your phone is updating.
> The device was never intended to be used this way in the first place.
The manufacturer has no say on the usage, simple as that. That there are better ways to tinker is irrelevant. It doesn't preclude the right of people to tinker with anything they own at their own convenience, including an iPhone if that's what they got.
You are absolutely right. The manufacturer and the law have no say on the usage of a device I own. And still, I cannot expect them to support a way of usage that was never intended. So Apple shouldn't be (and isn't) able to sue jailbreakers, but they don't have to make it easy for them.
Apple’s security theater is the pillar of their anticompetitive behaviors. The only users it helps are noobs who will just install whatever. If some serious actor wants your data, the theater doesn’t deter them, as evidenced by Pegasus. If China wants your data, Apple will hand it over themselves.
It’s all a “think of the children” race to the bottom where rent-seekers abuse fear and sycophancy to monopolize critical infrastructure.
Targeted attacks from nation states aren't in the average person's threat model, though it's definitely one Apple tries (and fails) to tackle.
But Apple's security measures definitely help against virtually anything else. Over-zealous police and border patrol. Script kiddies. General spyware and malware.
I think you're forgetting how bad mobile security was before both Apple and Google started adding these hardening measures to their flagship devices. You really couldn't trust your phone at all. Now, you somewhat can.
You're free to tinker with your John Deere equipment however you want, it just might require you to be smart enough to work around the built-in security measures.
I mean, yes? The OS not being locked down is a feature, like the ring tone being changeable. If you want that feature, buy a phone/tractor with that feature.
I’m not disputing the right to do whatever you want with your phone.
My comment was specifically targeting the fact that jailbreaking somehow encourages people to develop means of circumventing the security of devices. This is when it becomes dangerous - suddenly it has the potential to hurt unsuspecting users. Saying that the responsibility is with Apple (or Google) to make this possible feels like a straw man argument - their job is to earn money (no illusions there) and keep users safe, even if it means making it harder to tinker (which also helps them earn money).
> The manufacturer has no say on the usage, simple as that.
Where does this end? Should all devices that have computers attached be capable of being 'rooted'?
You would not have the expectation if for example you purchased an MRI machine, that it should be able to emulate a Game Boy. A phone is also a special purpose computer; and the phone's software is designed to guarantee that those special purposes always work.
In my opinion this is the main thing that this root crowd does not understand; the iPhone is not a general purpose computing device. You should not have bought one if that's what you wanted. The market is full of general purpose computers that can make phone calls.
Arbitrarily separating the hardware from the OS is not the same thing as the manufacturer having a say on usage. I am free to use my car how I want, but the fact that I may not be able to easily repair it or swap parts like the seats or the engine is another problem.
I feel like jailbreaking was much more of a thing about 10-12 years ago when the iPhone 3G/3GS and 4 came out. Around that time iOS was still pretty limited in what you could do. So jailbreaking gave you the ability to:
- Set a background wallpaper
- Reply to SMS in a quick reply window (I still miss BiteSMS)
- Create albums in Photos.
- An "Android Like" (back then) quick toggle.
Nowadays all these features and more are part of iOS, but back then jailbreaking let you do so many things that Apple hadn't gotten around to including in iOS.
> Around that time iOS was still pretty limited in what you could do
I'd argue it still is.
I can't install a third party web browser that doesn't use WebKit.
I can't select alternative default apps to most apps bundled with the OS.
I still can't unlock the bootloader.
I still can't browse the entire filesystem.
People who espouse the virtues of jailbreaking on iOS tend to forget one big thing... the exploits that are used to jailbreak are also exploits used by malware creators to compromise the system.
The difference between then and now is that back then, the limitations impacted the average user. “Normal” users were interested in jailbreaking because it meant they could get access to still relatively basic apps by today’s standards.
The items you just listed matter only to people like the posters here. We have philosophical differences with the limitations in place today, but back in the heyday, those limitations were just fundamental gaps in functionality available to users.
With the many major new APIs available to devs over the years and with the maturation of the App Store, those original drivers behind jailbreaking just aren’t there for the most part.
In my case, the iPhone users around me bump into barriers that wouldn't be there if iOS didn't keep all data in inaccessible silos, but they don't see them as such. For example, someone wanted to back up all their iMessage conversations, including photos. This turned out to be a pain precisely because I couldn't just copy an "all iMessage data" folder like I could on macOS, but they didn't attribute it to the lack of filesystem access. To them (and most users presumably), it was just a problem of "why doesn't the iPhone let me back up iMessage to my computer easily?".
Does the local backup with iTunes not include iMessage?
I don't know too many users who care only about a specific aspect of their phone when backing up. I do have non-technical friends who do iTunes/local backups because they "don't trust the cloud", but I've never encountered a person trying to do this piecemeal.
For what it's worth, there are decent 3rd party tools like iMazing that are easy to use and do give you more granular control.
Yes, it’s very unfortunate when an app is not available for some reason.
In most cases it’s a valid reason - the app is malicious, poorly made, or somehow in conflict with local law. Also imagine if you are an app developer and suddenly your project (on which you’ve spent countless hours of hard work) is unexpectedly leaked as freely available for download and side-loading. I’m not talking big corps like Epic and Blizzard … but people like you and me.
Open source apps can be loaded easily, so developers who really want to make their stuff available can still do so and users won’t need to hack their phones to get such apps. But of course this can’t happen en masse.
The effect of piracy is generally overstated and commonly bundled with the (incorrect) belief that a pirate would instead be a paying customer if the piracy option were not available.
The Steam model appears to be way better. I am buying 10x more things on Steam than I will ever have time to use, while I generally avoid the App Store as much as possible for Mac apps (and even for phone apps when there is an alternative).
The psychology of the Steam store is very different from that of the App Store.
> while I generally avoid the App Store as much as possible for Mac apps
Interesting, I do the opposite. When something is available in the Mac App Store or through other means, I will generally buy it in the App Store. Why? Application sandboxing is mandatory for Mac App Store apps and I strongly prefer that applications are sandboxed by default.
The second, less important, reason is that I can install the applications with just one click and don't have to download them from different places, look up their serial numbers, etc.
Agreed. I’ve even rebought some apps on the App Store just for the security and convenience.
That said, I’m struggling with the constraints Apple puts on the business models of developers, for example when it comes to upgrade pricing. While I have a fair number of Mac/iOS software subscriptions running (8 by my current count), there’s definitely software I passed on because it moved to a subscription model.
Factually wrong. You can run apps on iOS without jailbreaking that weren’t approved by Apple, or even ones banned by Apple. This has been true for years. Yes, the required resigning of the app every seven days is a major hindrance, but it is still something you very much CAN do.
> make the process as unpleasant as possible but just so enough that Apple can claim they don't block sideloading
Apple doesn’t claim they don’t block side-loading. Actually I believe they claim the opposite in the Epic case (even though it is possible via a supported process) because it was “necessary to ensure the safety of their user base” or something like that.
> It's disingenuous
Shifting goal posts from “you can’t” to “you can’t without X” once someone points out the first was factually incorrect is considered disingenuous by many too.
> Shifting goal posts from “you can’t” to “you can’t without X” once someone points out the first was factually incorrect is considered disingenuous by many too.
First off, you're making a huge assumption here.
Second, the only goalpost shifting here is your attempt to pass off complex and temporary workarounds intended for developers as resulting in a device that is somehow the same as jailbroken devices. This is incorrect for the obvious reasons pointed out already.
Perhaps you are unfamiliar with jailbreak and didn't know apps installed via jailbreak don't need to be constantly renewed and resigned? Installing apps long term is still something you can't do, as I originally said.
Attempting to pass off a lesser solution that doesn't come close to matching what I originally mentioned and pretending it is the same is dishonest and disingenuous.
Please show me where I was wrong in anything I said. I've shown conclusively that your statement quoted above was wrong, nothing you said here says anything other than you are angry and want to lash out.
> Perhaps you are unfamiliar with jailbreak
Considering I wrote one of the first highly trafficked jailbreaking guides for iOS and have been involved in the JB scene in various capacities (never a key role, admittedly) since the very beginning, I'm pretty sure that isn't the case. ;)
That's true, but becoming a developer is outside what's being discussed here. Non-developers jailbreak too. To say nothing of the extra expense and equipment, besides the phone itself, that you're going to need.
As unpopular as I think this opinion is, I can't help but agree. If you want a hackable phone there are literally thousands of different Android phones that you can buy for a fraction of the price that you can tweak/hack to your heart's content.
There’s a saying about cake that seems to apply here.
Again, there are other options for both. Android wear devices are still a thing, and there are multiple options that aren’t SMS that are available to the Android ecosystem.
If you want to live in the Apple ecosystem, that comes with some tradeoffs. Can’t have it both ways.
Actually, there isn't. It's perfectly possible to be able to have a process to request bootloader unlocks from Apple, and also be able to use the ecosystem.
The entire point of the saying that you can't have your cake and eat it too is that, well, it can't be both in your hands and in your stomach. But it's perfectly possible for me to be able to call Apple and get a bootloader unlock code, without any other change to anything.
The general point behind the proverb boils down to "you can't have it both ways" or "you can't have the best of two worlds".
That's essentially what the parent comment is trying to do. When offered the Android ecosystem as an alternative, the counterpoint was to lament the lack of Apple Watch and iMessage support in the Android ecosystem.
The cake thing was in response to you wanting Apple Watch and iMessage…on Android.
This is the definition of wanting it both ways.
I don’t think anyone would argue that it’s technically impossible, but that should not be mistaken with “practically impossible” given Apple’s business model.
I don't want iMessage on Android, I want a hackable phone that supports iMessage too. I realize that's not really possible now, but it's not a logical impossibility, like being alive and dead at the same time.
> Also, with a jailbreak, I own my data. iOS (Android too, I believe) actively prevents some data from being backed up or copied otherwise.
The primary area you see this is with applications which have flagged that data as sensitive so it won't be included in unencrypted backups (which includes iCloud). This is somewhat important for security — for example, you reasonably do not want TOTP seeds floating around invalidating your second-factor security model — but it means you have a problem if an application you care about uses that feature in a way you don't anticipate.
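For what it's worth, the main mechanism behind this on iOS (as I understand it) is the keychain accessibility attribute: items stored with one of the *ThisDeviceOnly classes are never migrated into backups, encrypted or not. A minimal sketch, assuming a hypothetical authenticator app; `storeTOTPSeed` is an invented helper:

```swift
import Foundation
import Security

// Minimal sketch: a TOTP seed stored with a *ThisDeviceOnly accessibility
// class is usable while the device is unlocked, but is never migrated to
// iCloud or local backups, so it can't "float around" in a backup file.
func storeTOTPSeed(_ seed: Data, account: String) -> OSStatus {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account,
        kSecValueData as String: seed,
        // The attribute doing the work described above:
        kSecAttrAccessible as String: kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
    ]
    return SecItemAdd(query as CFDictionary, nil)
}
```

Which is exactly the trade-off described above: the seed can't leak through a backup, but it also can't follow you to a new phone.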
Also with a jailbreak, other people own your data too. A jailbreak is an exploit. A hammer can be used to drive in a nail (jailbreak) or it can be used to smash windows (malware.) Same tool in the end.
The very act of jailbreaking is what keeps your system vulnerable to the same exploits used by malicious actors.
How does this let you sideload iOS apps, or run older versions of them? That's like responding “just use Linux“ to a Windows user's complaint about Office.
Sideloading apps is not a feature of iOS. It is also illegal unless the app explicitly allows it in its license. If you need to be able to side-load apps, you need a device intended for that purpose.
Regarding data, all user data on iOS is backed up either through iCloud and/or on your Mac.
> I understand that around the 90s owning anything with an OS also implied the ability to mess with the box of bolts and circuits that run the OS but that’s no longer the case. At least not when it comes to phones.
Incorrect.
It is still the case with nearly all Android phones. You can install any OS version, gain full permissions, mess with kernels, modules, frameworks, recoveries, ...
I use my device to scratch my nose; who is anyone to tell me how to use my device after I purchase it? A jailbroken iPhone has been useful in so many ways over the years. Currently I am not jailbroken, but I wish I was, just to see all the cool mods I could have access to. There are so many useful tools; it blew my mind how many features Apple left out. Hell, when I bought the phone, Rogers sold me a video messaging plan. Sure was surprised to learn the original iPhone didn’t even have native video recording. I was, however, able to jailbreak and download Cycorder, and suddenly I could use my device not as it was intended, I guess, but in a way that was super useful to me. Your point of view seems so narrow-minded to me.
> "It's too bad on iOS Apple forces people to choose between security and control of their own systems..."
moreover, one of the critical vulnerabilities from a user's perspective is the network connections that a device makes, nominally on behalf of the user, but really on behalf of privacy-forsaking data harvesters like google and facebook (and even apple themselves, to a certain extent).
jailbreaking is one method by which users can (potentially) regain a modicum of control over their own private information, as well as block exfiltration through exploited security vulnerabilities.
> one of the critical vulnerabilities from a user's perspective is the network connections that a device makes, nominally on behalf of the user, but really on behalf of privacy-forsaking data harvesters like google and facebook
Apple provides APIs now on stock iOS to allow an app to use a “VPN profile” to implement a firewall blocking outbound connections, but without requiring a remote server (aka the app applies rules locally to outbound connections). The downside is you have to trust Apple to pass their own traffic through the app’s firewall unlike with a JB, but at least that covers the other examples you gave.
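For the curious, this is built on the NetworkExtension framework's packet-tunnel API. A rough sketch of the idea, assuming the approach on-device firewall apps take (the class name and blocklisted netblock are invented; a real app also needs the app-extension target, entitlements, and profile installation):

```swift
import NetworkExtension

// Rough sketch of an on-device firewall via NEPacketTunnelProvider.
// "LocalFirewallProvider" and the blocked netblock are illustrative only.
class LocalFirewallProvider: NEPacketTunnelProvider {
    // Hypothetical example: sink a single ad/tracker netblock.
    let blockedRoutes = [
        NEIPv4Route(destinationAddress: "203.0.113.0", subnetMask: "255.255.255.0")
    ]

    override func startTunnel(options: [String: NSObject]?,
                              completionHandler: @escaping (Error?) -> Void) {
        // No remote server: the "tunnel" endpoint is the device itself.
        let settings = NEPacketTunnelNetworkSettings(tunnelRemoteAddress: "127.0.0.1")
        let ipv4 = NEIPv4Settings(addresses: ["192.0.2.2"],
                                  subnetMasks: ["255.255.255.255"])
        // Only blocklisted destinations are routed into the tunnel;
        // everything else keeps using the normal interfaces.
        ipv4.includedRoutes = blockedRoutes
        settings.ipv4Settings = ipv4

        setTunnelNetworkSettings(settings) { error in
            if error == nil { self.sinkPackets() }
            completionHandler(error)
        }
    }

    // Read packets routed into the tunnel and drop them: a local firewall.
    private func sinkPackets() {
        packetFlow.readPackets { _, _ in
            self.sinkPackets() // discard the batch and keep reading
        }
    }
}
```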
Taking note of multiple comments mentioning Pegasus, I wanted to flag on the top one: this is very unlikely to be a fix for it.
- reported by anonymous researcher
- if it is related, this would be the 2nd of 2 bugs that would need fixing. The first is the 0-click escape hatch from the Messages sandbox to calling functions on IOMobileFramebuffer
- there is only one bug listed for this release, and it is unlikely Apple would publicly flag that #2 of 2 bugs was fixed, but not #1
- extremely tight timeframe from press reports of _a_ bug to a fix deployed, I'd like to think you could get a week turnaround on that, but...in practice, at BigCo, that timeframe would require moving heaven and earth, and yet still choosing to be irresponsible by skipping most of the gates on the release process
There isn’t a “Messages” that is separate from everything else; that’s an illusion Apple marketing encourages. iOS is a shared-tenancy, general-purpose operating system. A vulnerability in any of the core libraries can affect any of the apps (first party or otherwise) and a hostile or compromised application (first party or otherwise) can compromise the entirety of the device and its data.
Whether something is a Messages vulnerability or “something obscure” is just marketing spin.
I think the important part is that any person can force data onto your device via messages. If that is exploitable, it’s a huge problem. That’s why people care.
AFAIK the current exploitable firmware for jailbreaking is 14.3 and I'm genuinely surprised that all these exploits for later versions aren't trickling down into Taurine or unc0ver.
They probably are, and this bug may already have been known. Many jailbreaks are often held back for as many versions as possible, since that can allow the jailbreak to target more iOS versions with a single exploit without Apple quickly blocking it for future versions.
That really doesn’t sound very ethical though. An exploit can literally put people in danger. Holding back on reporting just so a few enthusiasts can fiddle with parts of their phone nobody intended for them to use is plain irresponsible… even antisocial.
Jailbreaking is a bug, not a feature. These people need to realize the abusive relationship they have with their phone company and need to decide which way to go forward. Apple never promised to support jailbroken phones, they heavily and openly police modifications from the get-go.
> These people need to realize the abusive relationship they have with their phone company and need to decide which way to go forward.
What’s the alternative? I’ve got a Pinephone coming in the mail — which I ordered precisely because of these sort of disagreements between me and Apple/Google — but I really doubt it’ll be useful for anything away from home (GPS, photos, etc).
No real alternative currently, as I see it. The main ones track the hell out of you, dark patterns galore; Pine and the other Linux phones are simply not at 1.0 yet; and I count AOSP derivatives as aftermarket solutions, because they don't have the engineering power to keep developing the fork, either the AOSP side or the hardware, should it come to that. There isn't really a best choice.
I personally opted for an older Samsung flagship, LineageOS without G services, and my own cloud at home. And I'm not counting on this being sustainable to be honest. I might go more Stallman later, buy dumber devices with lots of tradeoffs or just give in to the zeitgeist, and buy the shiniest tracking chip.
Edit: Kudos for getting the Pine - putting your money where your mouth is is something that at least has the potential to change things.
Right, but you're missing the point that you have to do that intentionally. You're not going to go to bed at night with iOS 14.x and wake up with iOS 14.x+ because your cable connected itself to the laptop and ran the update. It will, however, do this if you are on wifi. This is the point the GP felt it was not necessary to expand on.
Thank you for the correction; my PC isn't supported by iTunes and my iPhone doesn't support 5G, so I never knew this. I guess I have to buy a new $1000 phone to get this arbitrary restriction lifted.
Apple never came out with an iTunes Linux client? Wow!
Still, you have options: load Windows in a VM (hint: MS releases a fully functioning VM image every month that works on all major hypervisors), install iTunes, and you're all set.
I just had to connect to wifi for the update to start, the button was greyed out. If you don't trust me on this, I can send you a screenshot when this happens.
Wait a second, yesterday I misread your comment. Is this only available on 5G and not below? What is the reasoning for gating it to newer phones and countries that have that infrastructure, considering my Pixel has no issues updating over 4G?
Apple has an agreement with the carriers to not allow iOS updates over pre-5G networks. Carriers used to be able to (possibly still can?) disable things like FaceTime over cellular too.
iOS is hundreds of millions of lines of C and C++, with innumerable ways to feed in untrusted inputs. In that setting, it's guaranteed that there will be a constant supply of remotely exploitable memory corruption bugs.
The 0-days will continue to drop on a regular basis until OS vendors embrace widespread memory safety as something that's vitally important to more than just a handful of dissidents and journalists.
edit: parent comment previously said "Makes me wonder how secure iOS really is", which is what I was responding to.
Is the implication here that Apple doesn't value memory-safety? Why would you say that?
Apple have created and embraced an entire language (Swift) around the idea that things like memory safety are important, but you cannot just rewrite an entire operating system overnight into a new language.
I didn't say that. Their MacGyvered memory-safe iBoot implementation [1] shows that it's at least a consideration and that they're exploring ways to shore up vulnerable components.
Swift is not (currently) a viable replacement for C.
I'm not saying they should rewrite the entire OS immediately, but: given the pace at which they're able to pump out new marketable features; the massive deployed base of iOS devices; and the sensitivity of the data that is kept on those devices, I would argue that they damn well better have a long-term plan to replace their bug-ridden kernel and other important low-level systems, with code that isn't vulnerable to classes of bugs that have been solved for decades and are frequently exploited in the wild. And I'm not just talking about Apple here.
> Is the implication here that Apple doesn't value memory-safety? Why would you say that?
Presumably because they also said
> iOS is hundreds of millions of lines of C and C++
and Apple did write a whole new operating system that way not so long ago despite the well-known safety issues of doing so. If any company in the world has the resources to incrementally rewrite an entire operating system with huge numbers of active users today so that the attack surface becomes progressively smaller, it's Apple.
iOS was a stripped-down MacOS. It has evolved since then, but that was the core and much of the user-space code. All of that came from NeXTSTEP, which was based on the Mach kernel and BSD from the late 80s. So, no, Apple did not write a whole new operating system this millennium.
Fair enough, it's true that they didn't start from scratch, but iOS today has evolved far beyond those original borrowed elements and is now much more than just a kernel and some early shared user space code. Even if that were not the case, it is still surprising that a company with Apple's resources would continue to favour writing new parts of iOS the same way 14 years after its first release.
iOS. I realise it feels like smartphones have been ubiquitous forever, but iOS is actually only 14 years old (and obviously most of the code in it is far younger).
It's not as if no-one knew about the dangers of writing security-sensitive systems in C or C++ in 2007 or as if the Internet was some new idea where no-one understood that bad people would try to remotely compromise your system using it. Apple might not yet have realised how important the iPhone was going to become back then but they were doing very well by a decade ago and they were huge by five years ago.
iOS reuses the XNU kernel from macOS, so most of its code is far older than that. And even in 2007, what good options were there for writing kernels? Most of the attempts in memory-safe languages were just toys.
Right, but it's not 2007 any more. Apple has more technical and financial resources available to it than many small countries. Memory-safe languages aren't toys now. Even if they still were, Apple could have built a better one from scratch.
Maybe in 2007 it made sense to build iOS on an existing platform and use the normal systems programming languages. Why couldn't more recent development of potentially vulnerable components have been done using safer technologies though?
Take a look at the long list of security issues fixed by last week's iOS 14.7 update[1] and tell me how many of the vulnerable systems could have been written or rewritten to use more secure technologies. Look for terms like "use-after-free", "buffer overflow" and "out-of-bounds read".
Then there is the iOS 14.7.1 update we're discussing today. Yet again this seems to be about fixing a memory corruption issue in a kernel extension written in an unsafe language that has a track record of exposing vulnerabilities. When do we start learning from those mistakes?
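For readers who haven't met those terms: here is a deliberately buggy Swift sketch of the use-after-free class, using the unsafe pointer APIs that opt out of Swift's safety guarantees the same way all C and C++ code does. Safe Swift (or Rust, etc.) makes this unrepresentable:

```swift
// Deliberately buggy illustration of a use-after-free (not from any CVE).
let buffer = UnsafeMutablePointer<Int>.allocate(capacity: 1)
buffer.initialize(to: 42)
buffer.deinitialize(count: 1)
buffer.deallocate() // memory returned to the allocator...

// ...but the pointer still dangles. This read is undefined behavior: an
// attacker who can influence what gets allocated into the freed slot
// controls the value the program now acts on.
print(buffer.pointee)
```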
I believe (unsure about this) that it reused a reasonable amount of userspace macOS code as well, perhaps to do fonts and the like? I have to imagine most of e.g. WebKit was also taken from macOS.
Clearly they don't care enough to force their own employees to use it, which is how we got the absolute utter crap that is a proprietary 802.11 protocol implementation in the kernel and the endless stream of Objective-C userland exploits.
Agreed. What makes this worse is that apparently a single component is responsible for so many vulnerabilities over the past ten years. One would hope that over the years this component would’ve gotten thorough review and had hardening added to make it less exploitable.
It's probably not realistic as long as you insist on writing such a complex and externally accessible application in a relatively low-level and error-prone programming language. Remove that constraint and the odds change considerably.
Now notice the companies behind all the major browsers and the companies behind most of the popular recent programming languages are basically the same set, and join the rest of us in wondering how this is still happening.
Why would that be vitally important? They are a business selling lifestyle and technology. For them vitally important is staying in a profitable business. I think we can agree that we're past the point where we can believe that these vulnerabilities threaten that. These vulns mean to them, I think, only risk. Mitigation will come in a form of some patches, and some PR. And the world will go on, continuing to circulate these phones.
Fair question, it's a shame you're being downvoted for it. I do agree that there is no apparent profit motive to do much more than lip service and short-term damage control, although Pegasus might start to change that, we'll see.
It's important in the same way that phasing out ICE cars is important -- important for society, but unlikely to happen in a timely manner without regulation.
Here's a fun strawman: the EU or California could announce limits on the sale of products relative to the number of lines of memory-unsafe code used to build them, starting in a few years.
I agree. That'd be fun! They could introduce a limit and if the product doesn't pass the limit, is outdated or anything, then it gets an ugly Unsafe sticker on the product box. Stickers could be mandated regarding the cloud usage of the device too: This one phones home to China, this one to Russia, this one to the USA. Let the people know.
Counting the number of CVEs to measure the security of a piece of software makes as much sense as counting lines of code to measure a developer’s performance.
I don't agree with this analogy. It may very well point to a fundamental problem with the design of the software, or in methodologies employed in its development. In this case a reasonable inference could be that Apple's memory safety design principles need some re-examination. Hard to tell without being on the inside.
> Someone breaking into my house 10 times by jiggling the same doorknob a contractor told me they “fixed” would be very concerning.
The complexity is not the same. My grandfather is not questioning my ability to fix his computer that “breaks” once in a while just because I already fixed it the last time.
I think the "counterpoint" here is that that makes an amount of sense >0. While it's far from a complete story, and there are plenty of edge cases where it can be misleading, it's also a metric that will tend to bias in the right direction and has a decent relation to what we are trying to measure.
Some researchers get paid per CVE and thus are encouraged to release hundreds of frivolous reports. Some companies publish frivolous ones in order to raise their own profile.
Also, the software in question is a complicated subsystem to write, on a platform that's guaranteed to get many eyeballs on it. So the fact that these CVEs land roughly once per year might also demonstrate that there are relatively few bugs (if we knew the number of people and hours spent researching versus CVEs published, then maybe we'd have a more meaningful statistic).
CVEs do also demonstrate that active research is happening. There are plenty of common libraries out there that never get audited. Do fewer CVEs mean they're more secure? Or does it just mean that nobody has checked?
Thus on its own, that data is pretty meaningless in terms of deriving a trend.
The reason the LoC comparison was made is because measuring lines of code doesn’t tell you how long a developer has spent debugging, reading documentation, or doing other research required. It doesn’t tell you how secure, performant, or even buggy the code is. All it tells you is the number of lines written and literally nothing more can be derived from that figure. Likewise with CVEs submitted.
Counting CVEs is a trivially easy argument to debunk. It’s a topic well written about by many open source contributors and security researchers. It’s a topic discussed on here frequently too. And it’s a topic that common sense alone should be able to debunk, should one spend a few minutes thinking about it rationally.
However, I have written a more in-depth reply about why counting CVEs is meaningless here:
IOMobileFramebuffer is a massive driver (or rather collection of drivers). These days most of the code runs on a coprocessor called DCP, which I am reverse engineering. The firmware on M1 Macs is 7MB. 7MB to put pixels on screens (this does not include the GPU/rendering).
Display controllers are complicated (and Apple ones are especially so, with all their fancy features).
How hard is it to understand that Apple is offering a closed ecosystem, with all of the pluses and minuses that implies? If that isn’t what you want, just don’t buy it. Vote with your $CURRENCY.
Personally, the last thing I’ve got time to worry about is what’s going on in my phone. So I’ve got an iPhone and use basic common sense when choosing what to run on it.
Yes, this is HN but it sure does get old seeing the inevitable complaints about Apple. I’ve been around long enough to know what happens to Apple when they aren’t selling what people want. That isn’t the case today. Maybe get over it?
There isn't any commercial or open-source mobile OS without security flaws. Given the importance of iOS, it's no wonder it's under scrutiny by attackers, just like Windows was (and is) back while Apple was memeing about a virus-free Mac OS. That was never really true; Mac OS was just a mostly irrelevant target.
I'm not so sure it's accurate to describe macOS as being an irrelevant target. MacBooks in particular have had a not-insignificant market share among many valuable targets for hackers (software developers, the affluent etc) for at least ten if not fifteen years now.
Also, I think meming is a bit of a stretch. Of all the desktop operating systems (macOS, Linux, Windows), macOS has had the strongest security push: integration of the Secure Enclave throughout the system for protecting secrets, a read-only system volume, mandatory sandboxing for App Store apps, requiring applications to request use of screen capture/camera/mic/address book, etc.
Sure, holes are found. Strong security is a long process of adding more and more boundaries (while giving the ecosystem some time to adapt to those boundaries) and fixing more and more logic/memory errors.
Regardless, macOS is probably the most secure desktop operating system out-of-the-box.
That's whataboutism. Unless there are many more exploits for Android, more than the increase in marketshare warrants, there is no reason to think a closed ecosystem is more secure than an open one.
It is hard to speak about the exploits we don't know about, but for the ones we do: there are vastly more exploits on Android, if for no other reason than that handset manufacturers refuse to ship upgrades.
It is a thing, yes, but it is avoidable nowadays. Had Google forced Project Treble onto everyone and strong-armed the carriers from the beginning, it would not be an issue.
Apple is voting with their $CURRENCY in federal Congress, and state legislatures, and city council meetings every day. They actively try to change the law in their favor, and curtail regulations that would make them play fair, by bribing senators. They cannibalize companies that try to compete through immense political (money) and market power. There is no such thing as a "free market" here.
I'm required to use the devices Apple creates, because work. Once your product is required to be in the hands of many people, the idea of "vote with your money!" is objectively, demonstrably false. You might as well tell working-class people to just walk to work. It affects me. So when someone supports a company like Apple against all evidence, I'm inclined to further criticize Apple and the users who advocate against themselves. I'm even open to harsher regulations on Apple, similar to how regulations on gambling are a necessity to prevent heavily negative socioeconomic effects.
This seems reasonable on the surface but it's really not. You're basically arguing that nobody should criticize anything that costs money - that is not a world I want to live in as a consumer.
It also ignores the tragedy of the commons aspect; real world evidence of it is abundant at this point. At the end of the day we need legislation similar to right-to-repair that covers device ownership and control.
Caring about how the devices that the majority of the population use (not just iPhones--all/both major mobile OSes) work or don't work is very much in the HN ballpark.
> How hard is it to understand that Apple is offering a closed ecosystem, with all of the pluses and minuses that implies? If that isn’t what you want, just don’t buy it.
Unfortunately, all my friends and family use iMessage and Facetime, and Apple makes that impossible to access on an Android phone.
Also, Apple buys up huge amounts of the supply chain such that their screens, cameras, etc. are better than basically anyone else's, and if I were to switch I'd lose access to every app I've bought over the past decade. (If not for Requiem and TunesKit, I'd lose access to all my movies and TV shows too.)
> Also, Apple buys up huge amounts of the supply chain such that their screens, cameras, etc are better than basically anyone else's,
Care to provide a source? Last I heard, their screens are made by Samsung and LG, and the camera sensors by Sony and OmniVision. None of these companies are owned by Apple.
The GP said that Apple buys up those vendors' supply chains such that other device manufacturers cannot use them. So if you want the specific features in Apple devices, you have to get the Apple device. I can't imagine Apple does this by accident.
That's all true, and at the same time it does not invalidate the OP's argument.
You get to choose your phone platform. The implications there include whether or not you get exclusive access to Apple specific features like iMessage and FaceTime. They also include the cost of your existing collection of purchased apps and some DRMed media. You also have to pick your phone from the choices and features of the chosen platform's available devices.
You still have to choose "what you want" and then "vote with your $CURRENCY." Your "wants" obviously include not just the OS/ecosystem design choices, but your financial investment in your incumbent platform, the convenience of sticking with what you know, and your desires/requirements around screens and cameras.
You can totally trade off some of those other requirements against "platform security" or any other differences between iOS and Android. But don't fool yourself that you consider privacy critically important if you're willing to forgo it in favour of keeping your hundred or so bucks' worth of Android apps or video subscriptions...
>Also, Apple buys up huge amounts of the supply chain such that their screens, cameras, etc are better than basically anyone else's
Apple has never been known to have the fastest/best devices in a point-by-point feature comparison. There are better screens, cameras, and speakers in many, many Android devices.
Well, yeah. I'm not sure how you'd find out about this otherwise? Unless you think there's some arbitrary line where "proper journalists" are allowed to talk about things, but regular people aren't.
If the only information I could ever get about products was positive marketing from the creators of said products I don't understand how I could possibly navigate purchasing decisions as a consumer. Open public discourse about the pros and cons of various products is crucial to a functioning market.
Obviously if you already own an Apple device maybe you read announcements on their support portal, but that doesn't really help (it is great that Apple announce this stuff though).
> Thanks a lot Maddie! Sure, I wanted to write a full exploit and achieve tfp0 before submitting because it affects the submission quality. I'm pretty busy right now, so I planned to work on it after August. I did plan to submit it, but I wanted to get an exploit first :)
As a former pentester, that's precisely the opposite of the correct thing to do. tptacek could phrase this more eloquently, but pentesters do not try to weaponize exploits. The whole point of exploiting is to demonstrate that a vuln exists. Once that demonstration is complete, weaponization serves no purpose.
(No purpose for protecting users, anyway, which is the whole point of pentesting.)
I'm surprised no one seems to care. Maybe times are changing.
I didn't realize that there might be a difference in the awarded bounty level. That's... unfortunate for them. As you can see here, he was sitting on this for some time.
The only 'correct' thing to do is whatever the person who discovered the bug wants to do. Responsible disclosure is a nicety that most people in the industry follow, it is not a requirement.
Is there a way to enable instant over-the-air updates for 0-day fixes, etc.? I see my phone has "automatic updates" on, but it still requires me to download and install manually.
Usually the automatic updates run at night when it’s plugged in and on WiFi. I’ve never seen my phone ask to do it during the day unless I ignored the update for a few days.
I also believe Apple is doing some kind of rate limiting. Whenever I've upgraded iOS or macOS on day one, the download is painfully slow on my gigabit connection.
The person who buys the phone is one customer. Another customer is the cell phone networks, who don't much want to use their airtime to distribute OS patches, which use a lot of bandwidth but are almost entirely invisible to users.
Then perhaps they should allow updates to be sent during off-peak hours at a lower bitrate, while people are sleeping. The updates are signed by Apple anyway, so a carrier could distribute them off-peak from the edge nodes closest to the tower, avoiding backhaul and using spectrum that was going to waste anyway.
Rather than, you know, a vulnerable device behind on updates that isn't backupable to iCloud in any way.
Yeah, I currently don't have Wi-Fi, so my devices have stopped backing up to iCloud. Backups require my device to be plugged into power, connected to Wi-Fi, & have a locked screen! If you do not have Wi-Fi at home, would these conditions ever be met?
No extra rate limiting required. With ~1.7 billion active iOS devices, the infrastructure must be under a bit of a strain to deliver even a relatively tiny 50 MB delta.
For this particular update, my iPhone downloaded over 100 MB and my MacBook over 1 GB.
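Back-of-the-envelope, the aggregate load is wild (a rough sketch in Swift; the 1.7 billion figure is the one claimed above, and the delta size is illustrative, not an Apple number):

    // Rough aggregate-load arithmetic; none of these are official Apple figures.
    let devices = 1_700_000_000.0  // active iOS devices (claimed above)
    let deltaMB = 50.0             // a small delta update, in megabytes
    let totalPB = devices * deltaMB / 1_000_000_000.0
    print("\(totalPB) PB to ship one small delta to everyone")  // 85.0 PB
    // Even spread perfectly over a full week, that's a sustained ~1.1 Tbit/s:
    let tbps = totalPB * 8_000 / (7 * 24 * 3600)
    print("\(tbps) Tbit/s sustained")

At that scale it would be surprising if they didn't shape day-one traffic.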
I've also noticed somewhat slower downloads of new updates on my gigabit connection.
Ideally their caching network really is everywhere. I wish they'd do the Microsoft thing of letting you update based on others' local downloads. For campus networks, some of these updates can really load things up.
> I wish they'd do the Microsoft thing of letting you update based on others local downloads
Apple does let you do this! It just takes a little more work to enable.
You need a macOS device on your local network. Then in Preferences -> Sharing, enable “Content Caching”. This will transparently cache OS and app updates for all Apple devices on your local network. If any device requests an update your caching server doesn't have, the caching server downloads it first and then serves it to the client, which can modestly speed up updates even when they're not already cached. It also caches shared iCloud file contents (but with E2EE, so the server can't see the content). There's a command-line route as well, sketched below.
I keep a caching server running and really do see the speedup. It's great if you're setting up a new phone and installing lots of apps, which makes it particularly useful for corporate IT.
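For the command-line inclined: recent macOS versions also ship a management tool for this (AssetCacheManagerUtil, if I'm remembering the name right), so you can script it instead of clicking through Preferences. A minimal Swift sketch that just shells out to it:

    import Foundation

    // Minimal sketch: query the content cache via the bundled
    // AssetCacheManagerUtil tool. (Tool name/path recalled from memory;
    // swap "status" for "activate" to turn caching on, which needs root.)
    let proc = Process()
    proc.executableURL = URL(fileURLWithPath: "/usr/bin/AssetCacheManagerUtil")
    proc.arguments = ["status"]
    try proc.run()
    proc.waitUntilExit()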
Apple's been doing this for years with Content Caching on macOS - it used to be an OS X Server capability, but they rolled it into the regular macOS release a while back. You just turn it on for one of your Macs, and everyone on your subnet (or multiple subnets, with a little extra work) can use a local cache of updates.
Windows by default only shares within your local network. Optionally you can turn it on for the wider internet and it does not seem to use that much data in my experience.
It also opened literally dozens of network connections for downloading, with the result that almost all of your bandwidth would be hogged by the updates and everything else slowed to a crawl.
Your device will check for updates once a week by default. No, there is no way to do what you're describing; if every iPhone, iPad, and Mac sold in the last five years simultaneously reached out for a 2 GB update, the entire internet would probably fall over.
It could be done in a p2p way, like Windows updates: by default, devices share update files with other devices on the local network, and it can optionally work with devices over the internet too.
A bunch of plugged-in iPhones on Wi-Fi would be perfectly capable of distributing update files.
I'm not sure how it would be crippled. Say they cap it so your iPhone will upload at most 10% of the amount it downloads for an update. Right away that takes 10% of the load off the main Apple servers, and very few people would be affected by uploading 100 MB of a 1 GB update over Wi-Fi.
I'm not suggesting the whole thing become p2p, but that the p2p network assist the dedicated distribution servers. For me, just the local p2p would halve the number of updates pulled from Apple's servers, because there are two of the same Apple product on the same local network. A toy model of the cap is sketched below.
Obviously Apple doesn't need to do this because they are flawlessly pushing out updates every week, but if they were struggling with distribution, this is a proven, working system that Windows uses.
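Here's that toy model spelled out (numbers made up; this is not how Windows Delivery Optimization actually meters traffic):

    // Toy model of capped peer assist: each device uploads at most
    // `capRatio` of what it downloads; peer uploads displace CDN traffic.
    func originLoadGB(devices: Double, updateGB: Double, capRatio: Double) -> Double {
        let total = devices * updateGB      // data everyone must receive, in GB
        let peerServed = total * capRatio   // best case: every capped upload gets used
        return total - peerServed           // the remainder comes from Apple's CDN
    }

    // Ten devices fetching a 1 GB update with a 10% upload cap:
    print(originLoadGB(devices: 10, updateGB: 1, capRatio: 0.1))  // 9.0 GB instead of 10

And on a home network with two identical devices, the second one's download could (ideally) be served entirely by the first.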
Imagine all the iPhones that aren't on Wi-Fi when they get such an emergency update request: they'll melt the mobile networks trying to download 2 GB each.
The idea doesn't work.
This is why Apple devices check for updates once a week, staggered. The load is therefore mostly constant, other than the hardcore ones who manually check for an update once they hear one is available.
In this situation they could stagger the updates over the night, as sketched below. If you look at network usage graphs from ISPs, downloads drop to about 5% of network capacity between 00:00 and 06:00. There is a lot of spare capacity to push updates out at night, and they use a tiny amount of data compared to the video calls and streaming that go on during waking hours.
As a side note, upload rates seem to be somewhat constant at all hours. I assume it must all be IoT security cameras.
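Client-side, that kind of overnight staggering is cheap to implement. A sketch of the idea in Swift (the 00:00-06:00 trough comes from the ISP graphs mentioned above; everything else here is made up):

    import Foundation

    // Sketch: give each device a random install slot inside the overnight
    // trough, so a fleet's downloads spread across the whole night.
    func overnightSlot(after now: Date, calendar: Calendar = .current) -> Date {
        let midnight = calendar.nextDate(after: now,
                                         matching: DateComponents(hour: 0),
                                         matchingPolicy: .nextTime)!
        return midnight.addingTimeInterval(TimeInterval.random(in: 0 ..< 6 * 3600))
    }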
Not as a simple end-user with no other Apple infrastructure available to you. If you're using MDM, you have additional options available to you to create such a model on your own, but for Apple's consumer offerings, Apple defines the check-in schedule for each update, and it sounds like they've selected "install overnight" for this update.
I'd love to know why people are downvoting you. You're completely right - a minor update that would take 2 minutes on Windows or previous macOS versions now requires a 2 GB download and a 15-20 minute install process.
I don't think the install duration is new. Most of the security updates on macOS 10.15 have taken about 15-20 minutes to install on my 2020 MBP. And the first stage claims it will only take a couple of minutes. Annoying as hell.
If you think about the number of hours of TV people watch daily via streaming services, it's weird to me that a 2 GB OS patch could be considered a problem.
Data transfer isn’t a finite resource like oil or gas.
That doesn't mean that we should just ignore efficiency. Smaller downloads means less bandwidth required on apple's side as well.
This also ignores that not everyone has stellar connection speeds, that some people -do- have bandwidth caps, and that people are often on mobile connections. Developers really need to stop making assumptions about people's hardware and internet speeds... and just do their jobs and make efficient designs. Maybe one day everyone will have super beefy machines on fiber networks with 10 Gb NICs, but that's not the reality as of now.
If a patch needs to be 2 GB, then so be it. But if it could be 100 MB, then that's certainly better and something to strive for.
Any idea if Catalina or Mojave are affected? No security update for them yet. Perhaps the updates are forthcoming.
Apple is known for providing ZERO official guidance on the lifecycle of security updates for older versions. It really looks like they're trending towards an iOS model where there's no concept of support for anything but the latest version. (The one exception is that older devices stuck on iOS 12 have still been receiving updates.)
An exploit like Pegasus could just fill up the storage on the device, which would prevent updates from working. Why does this 14.7.1 fix require almost 2 GB of storage?
I heard something about updates for the new M1 Macs requiring users to download the entire updated image due to some signing issue? Maybe something similar is happening here.
Yes, crap. That's what iOS and macOS have been for the last 3-4 years. At one point I wondered whether Apple had laid off most of engineering and the complete QA team.
And Apple has been (re)writing more and more applications and frameworks in a safe language (Swift) for years now. You just don't rewrite hundreds of millions of lines of code overnight.
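For anyone wondering what "safe" buys in practice: Swift bounds-checks every array access and traps rather than silently touching adjacent memory the way C will. A deliberately contrived sketch:

    // In C, reading index 7 of a 3-byte buffer silently returns whatever
    // sits next in memory; in Swift the same access reliably traps, and
    // you can validate the index up front instead.
    let packet: [UInt8] = [0x01, 0x02, 0x03]
    let wanted = 7
    if packet.indices.contains(wanted) {
        print(packet[wanted])
    } else {
        print("rejected out-of-bounds read")  // packet[wanted] would crash, not corrupt
    }

A guaranteed crash is a denial of service; a silent out-of-bounds read or write is the raw material for exploits like this one.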
>You might have also noticed the “Encrypt local backup” checkbox. MVT only operates on decrypted backups, so there’s no point in encrypting your backup here and immediately decrypting it to scan it - just create an unencrypted backup and delete it after you’re done.
You should still do an encrypted backup even if you're going to immediately decrypt it for MVT. An encrypted backup contains more complete data; it's closer to a filesystem dump. The MVT docs actually explicitly recommend this, too:
>If you want to have a more accurate detection, ensure that the encrypted backup option is activated and choose a secure password for the backup.
This isn't remote-exploitable, right? I.e. the exploit happens if you have a trojanware app installed, correct?
If it's a remote exploit I'm going to be telling everyone I know to update ASAP, but otherwise seems alright to let the regular auto-update schedule do it for them.
I would have to agree. There was a time the iPhone was considered the most secure phone. It turns out that people can keep a secret and were simply keeping the vulnerabilities quiet. Some of them years old.
iOS: Never-ending zero-days, but you get an update for every device from the last 7-10 years to fix them.
Android: A more secure OS, especially as they integrate more components using Rust. But if your device is older than 2 years, you won't receive any security fixes.
So depending on your threat model, what you should pick changes. If you are a high-value target or someone who upgrades their phone every year, Android is the more secure option.
If you are the average person who is not being hit with state level attacks, iOS is the most secure option.
It's based on the Linux kernel, and many of the core components have been rewritten in Rust, which the Google security team seems to be suggesting is the solution to these problems.
I don't have any actual proof, but my gut feeling is that the Linux kernel is better tested and has better security hardening than the iOS internals.
I'd say they are probably about the same security-wise, with iOS and Android each being stronger in different areas (for example, an arbitrary file write in an end-user application generally doesn't result in RCE on iOS, but it can on Android).
The old thought that open source is more secure doesn't seem to really hold as there are terrible bugs in nearly everything all the time.
It should also be noted that Apple is rewriting certain parts of iOS in safer languages as well (see BlastDoor for Messages as an example).
The link says that iPad Air 2 and later are supported. Does this mean that the original iPad Air is vulnerable? There must be many of them still in use for Web browsing, video, games, etc.
I don't think you can conclusively say either way. iPad Air 1st gen only supports iOS 12, which was patched 40 days ago. So either they haven't released a fix yet, or it's not vulnerable.
For anyone curious, Apple has released a few updates for iOS 12 since support for some devices was dropped with iOS 13, with 12.5.4 being released in June. The iPad Air mentioned above is a nearly 8-year-old device. Are any Android devices of that vintage still receiving any updates?
It feels weird to be locked out of updating my iPhone only because some smart guy at Apple HQ decided I shouldn't be able to use my unlimited data for downloading updates.
> Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation)
They seem to have forgotten to mention the iPhone SE, which also gets iOS 14.