Hacker News | MikeHolman's comments

I worked on a browser team when Spectre/Meltdown came out, and I can tell you that a big reason why Firefox and Chrome do such severe process isolation is exactly because these speculative attacks are almost impossible to entirely prevent. There were a number of other mitigations including hardening code emitted from C++ compilers and JS JITs, as well as attempts to limit high precision timers, but the browser vendors largely agreed that the only strong defense was complete process isolation.
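As an aside on the timer mitigation: the idea was to make `performance.now()` too coarse and too noisy to resolve cache-timing differences. A rough Python sketch of the clamp-plus-jitter approach — the 0.1 ms resolution cap here is an assumed illustrative value, not any browser's actual setting:

```python
import random

# Hypothetical resolution cap in milliseconds (browsers shipped values
# in this ballpark for performance.now() after Spectre).
COARSE_MS = 0.1

def coarsened_now(raw_ms: float) -> float:
    """Clamp a high-resolution timestamp to a coarse grid, then add
    random jitter so an attacker can't recover grid edges by averaging."""
    clamped = (raw_ms // COARSE_MS) * COARSE_MS
    return clamped + random.uniform(0.0, COARSE_MS)
```

Coarsening alone is insufficient (attackers can build their own timers, e.g. with a spinning SharedArrayBuffer counter), which is part of why vendors concluded process isolation was the only strong defense.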

I'm not surprised to see this come back to bite them if, after like 7 years, Apple still hasn't adopted the only strong defense.


To add to this and to quote a friend who has more NDAs in regards to microarchitecture than I can count and thus shall remain nameless: "You can have a fast CPU or a secure CPU: Pick one". Pretty much everything a modern CPU does has side effects that any sufficiently motivated attacker can most likely find a way to exploit. While many are core specific (register rename, execution port usage for example), many are not (speculative execution, speculative loads). Side channels are a persnickety thing, and nearly impossible to fully account for.

Can you make a "secure" CPU? In theory yes, but it won't be as fast or as power efficient as it could otherwise be, because the features that enable speed and efficiency are all possible side channels. This is, in theory, what the TPM in your machine is for (though allegedly TPMs have their own side channels).

The harder question is "what is enough?", i.e. at what level does it not matter that much anymore? Judging by the post above, the answer comes down to a lot of risk analysis and design considerations. These design decisions were the best balance of security and speed given the information available at the time.

Sure, you can build that theoretically perfect secure CPU. But if you can't do anything that actually needs security on it because it's so slow, do you care?


This is also a fundamental property. If you can save time in some code/execution paths but not in others (a very desirable attribute in most algorithms!), and the algorithm is doing something where knowing whether it went faster or slower has security implications (most any crypto algorithm, unless very carefully designed), then this is just the way it is, and has to be.
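The classic concrete case is secret comparison: an early-exit equality check leaks, through timing, how long the matching prefix is. A minimal Python sketch of the contrast (in real code you'd reach for the stdlib's `hmac.compare_digest`):

```python
def leaky_equal(a: bytes, b: bytes) -> bool:
    # Variable time: returns at the first mismatching byte, so the
    # running time reveals how many leading bytes matched.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Accumulates all byte differences and branches only once at the
    # end, so timing doesn't depend on where the inputs differ.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

(The early length check still leaks the lengths, which is generally considered acceptable.)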

The way this has been trending is that in modern systems, we try to move as much of the ‘critical’ security information processing to known-slower-but-secure processing units.

But, for servers, in virtualized environments, or when someone hasn’t done the work to make that doable - we have these attacks.

So, ‘specialization’ essentially.


Your friend is genuine in their interpretation, but there is definitely more to the discussion than the zero-sum game they allude to. One can have both performance and security; sometimes it boils down to clever and nuanced design and careful analysis, as you point out.


> I'm not surprised to see this come back to bite them if after like 7 years Apple still hasn't adopted the only strong defense.

So Apple's argument that iOS can't have alternative browsers for security reasons is a lie.


Strange claim.

Security isn’t a one-bit thing where you’re either perfectly secure or not. If someone breaks into your house through a window and steals your stuff, that does not make it a lie to claim that locking your front door is more secure.

In any event, Apple’s claim isn’t entirely true. It’s also not entirely false.

Browsers absolutely require JIT to be remotely performant. Giving third parties JIT on iOS would decrease security. And also we know Apple’s fetish for tight platform control, so it’s not like they’re working hard to find a way to do secure JIT for 3P.

But a security flaw in Safari’s process isolation has exactly zero bearing on the claim that giving third party apps JIT has security implications. That’s a very strange claim to make.

Security doesn’t lend itself to these dramatic pronouncements. There’s always multiple “except if” layers.


> Giving third parties JIT on iOS would decrease security.

Well, at least in this case it would have greatly increased security (since it would have allowed the availability of actual, native Chrome and Firefox ports).

And otherwise: Does Apple really have zero trust in their OS's ability to satisfy the basic function of isolating processes from each other? This has been a feature of OSes since the first moon landing.


If JIT is such a problem then Apple shouldn't use it themselves. Sure, they let you disable it but it's still enabled by default while everyone pushes the narrative that Apple is all about security.


JIT isn’t the problem. It’s giving control of JIT to third parties.

We can still hate on Apple, it’s just more accurate to say they don’t trust their own app sandboxes to stand up to LLVM / assembly attacks from malicious apps with JIT access.


I just don't buy that it's a special security concern at all. There are so many other possible security vulnerabilities to exploit that don't involve a JIT compiler. So why would Apple specifically restrict third party apps from JIT?

It's realistically just another way to ensure they maintain control over app distribution. Safari sucks for web apps. Third party browsers are just different shells over Safari on iOS. Apps built on things like React Native support hotfixing without slow app store reviews - but your app will be slow without JIT and rules force you to still go through reviews for feature changes.

There's no issue with any of this on Android.


> It’s giving control of JIT to third parties

Any real-world examples demonstrating how it's insecure? Here and now it demonstrably decreases the security.


The alternative browsers have the required site isolation but aren't allowed. There's no fix for Safari and you must use it. I think it's very clearly decreasing the users' security.


Binary thinking is unhealthy.

Alternative browsers would introduce other security concerns, including JIT. It’s debatable whether that would be a net security gain or loss, but it’s silly to just pretend it’s not a thing.

Security is the product of multiple risks.

Discovering a new risk does not mean all of the other ones evaporate and all decision making should be made solely with this one factor in mind.


Can you provide any arguments that JIT would in fact decrease security other than "Apple says so"?

Every major mobile and desktop OS other than iOS has supported it for over a decade. Apple is just using this as a fig leaf.


"Decreasing the security" is not binary thinking; it's just a fact today. Also, the ability to run software doesn't make you less secure. I've never seen any real proof of that. It's the opposite: competition between different browsers forces them to increase security, and that doesn't work for Safari on iOS.


I think a detached and distanced perspective must come to the conclusion that vendor lock-in isn't healthy. For security, performance or flexibility it tends to fall short sooner or later.

One could also talk about the relevance of a speculative attack that hasn't been abused in years. There can be multiple reasons for that, but we shouldn't just ignore Apple's main design motivation here. That would be frivolous, and frivolity precludes serious security discussion.


Are you really surprised? Eventually the Apple distortion field starts to wane around the edges, but by then people have moved on to the new shiny.


I think Meta would make the most sense. Funnel people to facebook, instagram, etc. Get all that juicy tracking data and boost additional ad revenue.

Doesn't really seem like much of a win for consumers though... it's just trading one personal data hungry megacorp for another.


It might make some sense. However, not everyone uses Facebook or Instagram; it's not quite like Google, where just about everyone uses some of their services, if not many.

I suppose it would help Meta greatly expand their ad business, to places far beyond FB/IG.


Strongly disagree. 45 days to allow the authors to fix a bug that has been present for over a decade is not really much added risk for users. In this case, 45 days is about 1% additional time for the bug to be around. Maybe someone was exploiting it, but this extra time risk is a drop in the bucket, whereas releasing the bug immediately puts all users at high risk until a patch can be developed/released, and users update their software.
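The ~1% figure holds up as back-of-the-envelope arithmetic, assuming "over a decade" means roughly ten years:

```python
decade_days = 10 * 365       # bug's assumed lifetime so far
disclosure_window = 45       # days of coordinated-disclosure delay

extra_exposure = disclosure_window / decade_days
print(f"{extra_exposure:.1%}")  # -> 1.2%
```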

Maybe immediate disclosure would cause a few users to change their behavior, but no one is tracking security disclosures on all the software they use and changing their behavior based on them.

The caveat here: if you have evidence of active exploitation, then immediate disclosure makes sense.


I'm sure this isn't something at the exec level, but it seems possible someone somewhere in middle management who oversaw used van sales wanted to increase their revenue numbers and thought cheating the odometer would be an easy way to boost their numbers.


It's not even clear if FedEx has anything to do with this. The other named defendant is "Holman Fleet Leasing", which seems to imply these vehicles were leased to FedEx.

If that's the case, then any possible scandal here would be squarely on the company selling the vehicles - not FedEx.

FedEx might just be a tacked-on name. You see that quite often with Prop 65 cases. The plaintiff attorneys add anyone even remotely related to the case, just to drive up pressure and chaos, hoping for quicker/larger settlement offers.

In this situation, even if FedEx has nothing to do with vehicle sales, they might opt to settle and write a check just to make the bad publicity go away. If you think that sounds like a shakedown, you'd be right.


In a civil case this is the right thing to do. Chances are good that a company as large as FedEx met very closely with leasing providers and discussed how they were going to get those costs down further than if FedEx didn't lease. The lawyers would be silly not to fish for signs that FedEx knew a bit about why it could get the leasing rates it wanted.


I have no doubt the leasing company would factor in re-sale value of the vehicles... that's how all vehicle leases work.

I do doubt FedEx would be in any way involved in the details of selling leased vehicles. I can say with a high degree of confidence there never was a meeting with FedEx execs where they pitched the idea of increasing residuals by swapping odometers...


That's not really necessary. It seems unlikely to me that there wouldn't be a paper trail with someone in FedEx's procurement about such a project, particularly if this really included FedEx racking up the miles on the new odometer.

A project to cut an ongoing vendor's costs is about the only way for a large-cap procurement specialist to meet and exceed targets with no possibility of additional valid free-market bids. That opens them right up to questions of liability: managers knowing or avoiding knowledge, workarounds, special advice to other divisions, etc.

You don't have to go to jail to lose a lawsuit. I worked for a company that put together a whole system for reporting these kinds of ethics irregularities in the company's favor. I don't think that was a charity; it acts as a defense, or at least lowers the punitive damages a judge is likely to award when violations still occur.


There's actually a legal reason for tacking on anyone who is plausibly liable. The basic idea is to sue everyone in a single case and let the court sort out actual liability for each party as part of that single case.

Say the lawsuit is originally against just Holman Fleet Leasing, and FedEx is the one legally liable (maybe FedEx is the one doing something naughty, or maybe there's contractual language about FedEx assuming all legal liabilities for the vehicles sold). You're going to spend a bunch of time in court arguing with Holman about whether they're even the right party to sue, and your case is either going to get thrown out or you're going to lose. Meanwhile, the statute of limitations is still ticking, so if it takes long enough to adjudicate the case against Holman, you won't even be able to refile the same case against the correct respondent. Oops. And even if the statute of limitations miraculously hasn't run out, consider that the kind of person who would roll back an odometer might also have a punishingly short document retention policy, so all the documents that still existed when you filed against Holman have long since been shredded and destroyed, and your discovery in the new case against FedEx is going to be a single email saying "yeah, we don't have anything going back that far." Oops again.

Now consider the lawsuit filed initially against both Holman and FedEx. Assuming your list of respondents is complete, the case isn't going to get thrown out because you sued the wrong person. Liability will still be adjudicated (and the case amended to drop respondents as the proper liability holder gets determined), but now you don't need to worry about the statute of limitations running out while you wait for the determination of liability against the first respondent. And the document retention clock starts with that lawsuit and covers the time when you're just determining who holds liability, so now they can't delete those documents even if they otherwise would. Both of them are now legally required to retain everything you list in discovery for at least the duration of their involvement in the case. Sure, they could destroy those records anyway, but when records are destroyed in violation of discovery, courts regularly draw the worst possible inferences against the respondent.


These things never see the inside of a court room. It'll end up as a settlement check, with none of the involved parties admitting to anything. The lawyers will then move onto the next low hanging fruit.

I've learned over time, it doesn't matter how righteous your defense is - all that matters is the money it'll cost to make the issue go away. Turns out, it's almost always cheaper to write a check than defend yourself.


> It's not even clear if FedEx has anything to do with this.

A company the size of FedEx has accountants and actuaries watching leases of this scale like hawks. It's simply not believable that Fedex never "noticed." And if they noticed they were getting much better than normal resale values but didn't ask why, that's very much the definition of complicity.


Well yeah, there's that, but it can't have been just one person. There are at least dozens of people involved in the trading and maintenance of these things, and paper trails for the purchase and, I presume, the installation and replacement of the odometers. There are bound to be loads of people in the know, and since the true odometer reading can be read from the onboard computer, I find it really hard to believe that nobody caught this before.

It would've taken just one case of a mismatch between the digital and physical odometer without it being mentioned for a huge stink to be thrown up. And it'd also be the auction house's name on the block, because they should check these things themselves. If this is as widespread as they claim it to be, then even the occasional spot checks would show it.


Why should reddit have to freely support a third-party client that doesn't provide revenue for them?

The only reason is that the status quo is that they have freely supported these use cases in the past, but it doesn't seem that unreasonable for commercial API access to cost money.


why haven't they served ads via the API then? no one is stopping them.

they haven't done so because they have chosen not to. they are still choosing not to.

this is a calculated move by reddit to extract the highest amount of money possible from 3rd party app developers, and the users of these apps are the ones who are going to suffer. reddit waited until API use was counted on by some portion of its users before they pulled this lever. it's predatory.


I don't think they should - I'd be happy if they served ads over the API. I use a third party app because I prefer the interface, not purely because it's ad free although that is obviously a nice benefit.

I wouldn't personally pay for Reddit Premium so if ads are the only way to keep third party apps viable then so be it.


I totally agree. The Chinese Room and, in general, philosophical arguments about the limits of AI always seem to come down to the belief of human exceptionalism.


>If enough of us do this with personal websites, more and more people will stop using Chrome and start using Firefox. You don't even have to cut Chrome users off from the content -- just annoy them a little, and suggest Firefox.

It's a nice idea, but your personal website doesn't matter. Most people go to a Google website at least a few times a day, and those sites already tell you to switch to Chrome for the best experience. And almost all of the top non-Google websites also have a vested interest in fewer ad blockers, so none are going to riot over this.


Only the Google homepage tries to push you back to Chrome, iirc. And there is no reason to ever visit it since search is built in to all browsers now.


Beef is an extremely carbon-inefficient source of calories. Cattle require a large amount of land, either directly or indirectly (e.g. for corn fields). Pastures and farmland are not effective carbon sinks, and most land used by cattle was previously forestland or other land that served as a carbon sink. For example, one of the major causes of rainforest destruction in Brazil is cattle ranching.


You know what's actually "calorie" inefficient? Eating leaves. Because that's exactly what humans are poor at and animals like cows are very good at (though technically they're digesting the bacteria that eat the predigested vegetation).

Plants are largely composed of cellulose, which is not digestible by humans. Just because we pass it through, and just because a study found trace amounts of cellulose-consuming bacteria in our gut, doesn't mean we are well equipped to handle cellulose. Those are "calories" we are pooping out.

And if we reductively look at the energy density of macronutrients, sorry: carbs of any kind are not as energy dense as the fats from meat.

As an aside, this is one chief reason why calories, which are a measure of heat, are a poor metric for nutrition.

> Pastures and farmland are not effective carbon sinks.

Neither is any kind of farmland, whether it's cows that are being raised or tomatoes.

Carbon sinks only mean something in the case of fossil carbon being introduced to the atmospheric carbon cycle, but that doesn't matter anyway, because no amount of trees added to the earth is going to offset anthropogenic climate change in any appreciable way.

> Most land used by cattle was earlier forestland or other land that served as a carbon sink.

Not all forest land is a carbon sink, and not all forest land is woodland.

Also, I would really appreciate some evidence for this.

> For example, one of the major causes of rainforest destruction in Brazil is cattle ranching.

As if demolishing the Amazon were a prerequisite to cattle ranching, as opposed to something people do because the economic incentives are there, which has nothing to do with whether raising cattle in and of itself is harmful to the climate. Tons of beef have been raised on the Great Plains for over a century, and the Great Plains is nothing close to the kind of carbon sink you're thinking of.


As a sibling commenter alludes to, should we ban lettuce?

Trying to tackle sources one by one is going to be a mess and give us a crap solution full of holes, and tons of argument and inaction as people fight for their preferred solutions.

Tax the carbon entering the carbon cycle and let the costs of that flow through all the supply chains, soften the blow for people via rebates and by ramping the amount up over time to a well-telegraphed future target, and things will adjust. The poor will generally come out ahead after the rebate, since they tend to generate a lot less carbon, the rich will pay much more. Things like flying will probably triple in price, or more, if we set it at the price for durable sequestration, and so people will fly much less for recreation. And maybe we'll survive as a species to see another couple generations.


The point is that it is much more land efficient for us to directly eat plants than to route the plants through animals first. Cattle use 99% of the calories we feed them for their own functioning; only about 1% actually makes it into the meat we eat.

I wasn't proposing any specific solution, just stating that eating animals does in fact contribute more CO2 than eating plants. And I have no problem with carbon taxes, in fact I'm in favor. A carbon tax could certainly cover this case if it taxes the CO2 that animals emit.


The compiler absolutely can implement tail calls; I don't know why this keeps getting thrown around. Adding a high-level directive in the spec doesn't enable the compiler to do anything, it just enforces it. The only thing preventing it is browser vendors wanting the .stack property to stay well behaved, but that isn't required by the spec and certainly isn't relevant for non-browser targets.


On most hardware architectures (and software ones based on them), the compiler controls the calling convention: how the stack is managed, what gets pushed there vs. passed in registers, and so forth. The architecture may or may not have helpers, like specific "return from subroutine" instructions, that help manage the stack, but a lot is under the compiler's control.

WASM is not like that. It doesn't support jumps or that kind of manipulation. It does not expose enough control over the stack and calling conventions.

In theory could the compiler emit a virtual machine that simulates another machine where it _can_ control these parameters? Sure. It can rewrite it all into a huge loop-and-switch statement. But that is not going to result in anything close to efficient native code. By expressing tail calls directly in WASM the JIT can generate much more efficient code that takes advantage of the platform's native calling convention and tail calls can be _actual_ tail calls on the hardware.


This is simply false.

The compiler cannot implement tail calls correctly as it stands. You do not have access to modify the WASM stack and it's not present on the heap like it is for normal programs.

No compiler tricks can enable tail calls in WASM at the moment (with the exception of trampolines, which always work and are absurdly slow).
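For illustration, here's the trampoline workaround the comment mentions. Instead of a self-call, the function returns a zero-argument thunk, and a driver loop keeps invoking thunks until a plain value comes back, so the stack never grows. A hypothetical Python sketch (names are illustrative, not from any WASM toolchain); the closure allocation and indirect call per "tail call" is why it's so slow:

```python
from typing import Callable, Union

# A step is either a final value or a thunk producing the next step.
Step = Union[int, Callable[[], "Step"]]

def countdown(n: int) -> Step:
    # A would-be tail call becomes a returned thunk.
    if n == 0:
        return 0
    return lambda: countdown(n - 1)

def trampoline(step: Step) -> int:
    # Driver loop: bounce on thunks until a real value appears.
    while callable(step):
        step = step()
    return step
```

`trampoline(countdown(1_000_000))` completes in constant stack depth, where a naively recursive version would overflow, but every bounce pays for an allocation and an indirect call.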


I think they're referring to a compiler that is targeting WASM, not a WASM-to-machine-code compiler.


Even if we did somehow get the political will to fund a project of this magnitude, it could never work. The bubbles would get blamed for every single snowstorm, unseasonably cold day, and any other weather that happened after it was put in place.

I don't think it would last a year before it was taken down, regardless of whether or not it did what it was supposed to do or was responsible for any meteorological event.


I like this take. I think I agree with it, though I wonder if there is a limit. For example, if warming gets bad enough that there are obvious issues causing millions, would it be enough for folks to realize, "the negative consequences are worth it"?

I truly don't know.


I do. It would make things strictly worse, as CO2 continued on up, poisoning the oceans until their food chains collapse.

That has happened before, immediately before massive global extinction events. It would be a mistake not to avoid that.

