
You've explicitly quoted that source releases are not relevant:

> or, if the source code is not publicly released, after an update of the same operating system is released by the operating system provider

They have not released the source code, but they have released an update of their operating system on their reference Pixel hardware.

Therefore, all devices must update within 4 months of that Pixel release, regardless of source drops, per this law.


I would argue QPR updates are functionality updates and subject to the 6 month test.

I would also argue that a closed-source release in August 2025 would start the first 6 month timer (ending February 2026), and that the source code release would trigger another timer (if it differed in any way from the closed-source release).

A lot of this law is abstract and only if the EU challenges Google's approach would it be decided how it's meant to be applied in reality.


I believe QPR includes security fixes as well, which should trigger the 4 month timer

Your comment seemed to imply that a source release would trigger a different timer than a binary release, which is explicitly covered as the same thing in the law - for both the 4 and 6 month timers.


If you'd bothered to read:

> This definition excludes signatures and some metadata and focuses solely on the payload of packaged files in a given RPM:

> A build is reproducible if given the same source code, build environment and build instructions, and metadata from the build artifacts, any party can recreate copies of the artifacts that are identical except for the signatures and parts of metadata.


The same LWN article says:

> The contents, however, should still be "bit-by-bit" identical, even though that phrase does not turn up in Fedora's definition.

So, according to the literal interpretation of the article, signatures inside the payload (e.g., files that are signed using an ephemeral key during the build, NOT the overall RPM signature) are still a self-contradictory area and IMHO constitute a possibly-valid reason for not reaching 100% payload reproducibility.
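Fedora's definition above ("identical except for the signatures and parts of metadata") can be sketched as a toy check. This is a hypothetical illustration, not how rpm actually compares packages; the artifact structure and the excluded keys are made up for the example:

```python
import hashlib

# Hypothetical fields excluded from the comparison, per the definition.
EXCLUDED_KEYS = {"signature", "build_time"}

def payload_digest(artifact: dict) -> str:
    """Hash only the payload entries of an artifact, skipping
    signature/metadata keys, so two builds that differ only there
    still compare equal."""
    h = hashlib.sha256()
    for name in sorted(k for k in artifact if k not in EXCLUDED_KEYS):
        h.update(name.encode())
        h.update(artifact[name])
    return h.hexdigest()

def reproducible(a: dict, b: dict) -> bool:
    return payload_digest(a) == payload_digest(b)
```

Under this toy model, a file inside the payload that embeds an ephemeral build-time signature would still change the digest, which is exactly the self-contradiction noted above.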


A branch doesn't use any more space than a commit... I'm curious what their complaint was with a large number of branches?

There are various repositories with 500k+ commits


I’m assuming GitHub has a fair amount of database/cache overhead for most things, especially branches. I think most of what the web client sees is database content, and that there’s no use of git or the filesystem in any hot paths for web views.

So I can easily see why having many branches is more storage than the same number of commits.


It might be something silly like the number of items in the Branches dropdown menu.


That actually worked really well, and it provides great branch search features when you have lots of them.


We appeared on a list of the top 10 most egregious users of github so I assume they had database entries for these…


Ironically, Microsoft Office also fails to render most of my ODP/ODT files correctly, which are open standards. One might infer it's by design...


Reminds me of https://adamdrake.com/command-line-tools-can-be-235x-faster-...

Cluster computing can be useful, but until you're talking about petabytes of data, it probably isn't helping you


Kind of reminds me of the early days of, like, Hadoop, when it was all the rage. Then people realized they could do most of that stuff in a Python script on a single machine in less time, because of all the bookkeeping overhead and complexity.
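The classic example (and the one in the linked article) is aggregation that Hadoop was routinely used for but that fits comfortably in a single streaming pass. A minimal sketch of that kind of single-machine job, assuming line-oriented text input:

```python
from collections import Counter

def word_counts(lines):
    """Single-pass streaming word count: the canonical MapReduce
    demo, done with no cluster and no job bookkeeping. `lines` can
    be any iterable, e.g. an open file, so memory use stays flat
    regardless of input size."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts
```

For anything that fits on one disk, the constant factors of a pipeline like this usually beat the cluster's scheduling and shuffle overhead by a wide margin.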


It's far less interesting than finding an exploit in git-shell, which was how I read the headline :|


yeah. that's why I posted the (sadly, downvoted) tldr you responded to: trying to help others avoid wasting time on the OP.


I'm sure they aren't perfect, but their test suite and extensive fuzz testing is far better than any XML or JSON parser I've ever seen

https://www.sqlite.org/testing.html


I do not agree about JSON. As a format it is so trivial that one can cover it with a few test cases. The SQL dialect that SQLite uses is vastly more complex.


You'd think so, but in practice no two JSON parsers have the same bugs, so it's more complex than you might think:

http://seriot.ch/json/parsing.html#all_results
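Even a mature, widely used parser diverges from the spec in ways the linked survey catalogs. Python's own stdlib parser demonstrates a few:

```python
import json
import math

# Duplicate keys: RFC 8259 leaves behavior undefined; Python's
# parser silently keeps the last value. Other parsers keep the
# first, or reject the document outright.
assert json.loads('{"a": 1, "a": 2}') == {"a": 2}

# Non-standard literals: Python accepts NaN and Infinity by
# default, even though the JSON grammar does not define them.
assert math.isnan(json.loads('NaN'))

# Trailing commas: rejected here, but tolerated by some parsers.
try:
    json.loads('[1, 2,]')
    raise AssertionError("expected a parse error")
except json.JSONDecodeError:
    pass
```

Three parsers fed the same bytes can thus produce three different answers, which is exactly the point: "trivial format" does not mean "identical behavior".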


Think about it: if even JSON, which is fairly simple, has that many parser problems, how likely is it that something multiple degrees of complexity higher, which wasn't developed with untrusted database files in mind, is secure?

Besides that, the large majority of things tested in your link are irrelevant for security in this use-case. They do matter if you need fully stable de- and re-serialization or, e.g., have a fully trusted validator; for certain security-token use-cases it matters. Many of the things marked as failures could even be considered signs of a "more robust" or "more secure" (though not fully standard-conformant) parser, e.g. accepting trailing commas (disallowed by JSON), rejecting certain unescaped control/special characters (which JSON allows), rejecting overly long field values (which JSON allows), or similar. What matters is that a) it has protection against too-deep recursion, so no stack overflow can happen, b) it makes sure the text it outputs is correctly encoded (independent of whether or not it was well-formed in the JSON blob), and c) it doesn't crash in a way which leads to security problems (which the test suite you linked doesn't differentiate from "acceptable" crashes).


As long as one uses a single parser to read a particular input, it matters little if it produces a result that differs from what some other parser generates, as long as the parser has no security bugs. And JSON is simple enough that a particular interpretation of its spec can be covered with test suites, making a security vulnerability extremely unlikely.

Of course, if one parser is used to verify the payload and another to consume it, disaster follows, as it did with the iPhone verification bug.


The issue is they're heavily implying it's something specific to PinePhone. I just scrolled through the legal notices on an iPhone, and they all have the exact same wording. Small components, such as the graphics libraries, the kernel, health data, the web browser, etc...

(not to mention they're just plain wrong about the battery/thermal not having hardware-level limits)


It doesn't read to me that the author is stating it's specific to pinephone, I read it as them just saying "hey guys, look out for this."

If they use the license wording to help get it across, well, I don't necessarily agree but as they state in the article:

> I'm trying to be a bit inflamatory here, to start the conversation.


If the conversation is

> Software engineering does not put safety first

I'm 100% agreed.

If the conversation is

> The PinePhone has less quality control than Android/iOS

It's patently false

I would parse the article as the second, as it's referring to a specific phone throughout


I read it as the first, with a side of "We need to do this or FL/OSS hardware is going to look bad".

And I concur about the second being patently false. I would guess that it has more overall tbh.


>> The PinePhone has less quality control than Android/iOS

> It's patently false

PinePhone is just HW. Android/iOS is SW. Comparison doesn't make sense. Maybe compare PinePhone to Huawei Honor, or Xiaomi and their HW testing:

https://www.youtube.com/watch?v=tvRTY6sBPeE


> It's patently false

That's a strawman: the author never compared it to Android/iOS. What they did spell out is that some of the code they found is unsafe (e.g. code commented out to prevent false alerts that was never reimplemented in a safer way).


But the article does not mention that other phones might have similar challenges. If the article is about the whole industry then say that, if it is specific to the pinephone say that. Currently it's in that weird middle zone where it sounds like it is specific but all the facts would probably be applicable to many phones.


The actual end product is provided by Apple with no warranties? I find that hard to believe. It makes sense that the OSS libraries they use wouldn't have warranties provided by their developers, but an issue in any of those becomes an Apple issue when it's shipped in their product as long as the issue has an impact on iOS.


I believe so, at least on the software side. For example, the macOS EULA states:

> C. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THE APPLE SOFTWARE AND SERVICES ARE PROVIDED “AS IS” AND “AS AVAILABLE”, WITH ALL FAULTS AND WITHOUT WARRANTY OF ANY KIND

https://www.apple.com/legal/sla/docs/macOSCatalina.pdf


Sorry, but you're just plain wrong. So please don't spread false sense of security if you don't understand the issues, or the HW. You can read my other comments in this HN comment section for why.


The CPU isn't what's compromised - this is protecting against a compromised motherboard


BIOS (UEFI) level rootkits


I managed to get that much from the article but I still feel I'm missing a few pieces here. Are UEFI rootkits an actual concern, like are they common in the wild? Why should the responsibility of detecting them rest with the processor? How is this related to the Secure Encrypted Virtualization?


There have been a couple in the wild, but they aren't super common.

They've become a bigger concern with UEFI since it has a massive attack surface compared to legacy BIOS.

For processors sitting in AWS / Azure, the cloud vendors want guarantees, and they're who EPYCs are designed for.

The responsibility has to rest with the processor, since it's the only thing executing code prior to UEFI. What it's doing is validating that the UEFI was cryptographically signed with the correct key prior to running any UEFI code. When it's first used, it saves the key for the vendor's UEFI implementation and won't allow boot to proceed if the root signature ever changes (think something similar to root certs for HTTPS).

It's only relevant to Secure Encrypted Virtualization insofar as they are both implemented inside the PSP which is a separate ARM core that runs at a higher privilege level than the x86 cores (and is the core that actually initializes the x86 cores).

This is how all phones have worked for many years, but apparently it's now becoming a thing in servers too.
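The "save the key on first use, refuse if it ever changes" behavior described above is trust-on-first-use pinning. A toy Python sketch of that policy follows; `FirmwareGuard` and its in-memory storage are invented for illustration, and a real PSP verifies full signature chains against keys in one-time-programmable fuses, not a bare hash in RAM:

```python
import hashlib

class FirmwareGuard:
    """Toy trust-on-first-use pinning, loosely modelled on the
    described PSP behavior: remember the vendor's key on first
    boot, and refuse to boot firmware tied to any other key
    afterwards."""

    def __init__(self):
        # Stands in for one-time-programmable storage on the chip.
        self.pinned_key_hash = None

    def check_boot(self, vendor_key: bytes) -> bool:
        key_hash = hashlib.sha256(vendor_key).hexdigest()
        if self.pinned_key_hash is None:
            # First use: pin the vendor key and allow boot.
            self.pinned_key_hash = key_hash
            return True
        # Later boots: allow only if the root key is unchanged.
        return key_hash == self.pinned_key_hash
```

The point of the scheme is that a rootkit re-signed with an attacker's key fails the pinned comparison, no matter what it has done to the UEFI image itself.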


Oh, the UEFI code is run by the main processor... somehow I had always assumed it was running on some microprocessor on the mobo.


Ah. Yeah.

The motherboard just loads BIOS/UEFI into a predefined memory address and then starts the CPU

This is a pretty good explanation https://manybutfinite.com/post/how-computers-boot-up/

> In a multi-processor or multi-core system one CPU is dynamically chosen to be the bootstrap processor (BSP) that runs all of the BIOS and kernel initialization code

These days, the "bootstrap processor" is a separate core that your OS can't see. On Intel it's the IME (running Minix) and on AMD it's the PSP (ARM TrustZone)


> Are UEFI rootkits an actual concern, like are they common in the wild?

If one segment needs to worry about UEFI rootkits, it's cloud vendors. Very dedicated (nation-state sponsored) attackers could burn a zero-day hypervisor escape to install a UEFI rootkit that tampers with the processor's integrated HSM (as the article says, such tampering has already happened, and the exploits have been patched by AMD). As I understand it, if a vendor uses full memory encryption, the above exploit could lead to decrypting and exfiltrating other customers' data.


That an attacker might flash a tampered BIOS from inside a VM makes total sense. It’s surprising how many SPI ROMs there can be in a box, and how they’re basically all just waiting there to be exploited.


Cloud vendors should be using coreboot, not UEFI.


Not sure why downvoted. I run blobless coreboot for precisely this reason. My only regret is not being able to find newer x86_64 gear that supports it. OTOH you can still buy in-production arm64 boxes that boot with zero blobs (RK3399).


One of the cloud vendors created UEFI.


Then they know full well how bad it is!

*Jokes aside, I think Intel created UEFI (for Itanium?), not Microsoft?


The consortium has AMD, Intel, and Microsoft listed as contributors, so even if they didn't initially create the thing, they had a hand in it. The executable format used for UEFI is PE, which is telling.


Their statement says "It is a defense-in-depth feature", so maybe not?


It's the "think of the children" excuse.

