I would argue QPR updates are functionality and subject to the 6 month test.
I would also argue a closed-source release in August 2025 would start the first 6-month timer (ending February 2026), and the source code release would trigger another timer (if it differed in any way from the closed-source release).
A lot of this law is abstract and only if the EU challenges Google's approach would it be decided how it's meant to be applied in reality.
I believe QPR includes security fixes as well, which should trigger the 4 month timer
Your comment seemed to imply that a source release would trigger a different timer than a binary release, which is explicitly covered as the same thing in the law - for both the 4 and 6 month timers.
This definition excludes signatures and some metadata and focuses solely on the payload of packaged files in a given RPM:

> A build is reproducible if given the same source code, build environment and build instructions, and metadata from the build artifacts, any party can recreate copies of the artifacts that are identical except for the signatures and parts of metadata.
> The contents, however, should still be "bit-by-bit" identical, even though that phrase does not turn up in Fedora's definition.
So, according to the literal interpretation of the article, signatures inside the payload (e.g., files that are signed using an ephemeral key during the build, NOT the overall RPM signature) are still a self-contradictory area and IMHO constitute a possibly-valid reason for not reaching 100% payload reproducibility.
I’m assuming GitHub has a fair amount of database/cache overhead for most things, especially branches. I think that most things the web client sees are all database content and that there’s no usage of git/filesystem in any hot paths for web views.
So I can easily see why having many branches is more storage than the same number of commits.
Kind of reminds me of the early days of, like, Hadoop, when it was all the rage. Then people realized they could do most of that stuff in a Python script on a single machine in less time, because of all the bookkeeping overhead and complexity.
Think about it: if even JSON, which is fairly simple, has that many problems with parsers, how likely is it that something multiple degrees of complexity higher, which wasn't developed with untrusted data files in mind, is secure?
Besides that, the large majority of things tested in your link are irrelevant for security in this use-case. They do matter if you need fully stable de- and re-serialization, or e.g. have a fully trusted validator; for certain security-token use-cases it matters. Many of the things marked as failures could even be considered signs of a "more robust" or "more secure" (though not fully standard-conforming) parser, e.g. rejecting trailing commas (disallowed by JSON anyway), rejecting certain unescaped control/special characters (which JSON allows), rejecting too-long field values (which JSON allows), or similar. What matters is that a) it has protection against too-deep recursion, so no stack overflow can happen, b) it makes sure the text it outputs is correctly encoded (independent of whether or not it was well-formed in the JSON blob), and c) it doesn't crash in a way that leads to security problems (which the test suite you linked doesn't differentiate from "acceptable" crashes).
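To illustrate point a), here's a minimal Python sketch (using the stdlib `json` module as a stand-in for any parser) of what recursion-depth protection looks like in practice on hostile input:

```python
import json

# Untrusted input: an absurdly deeply nested array.
hostile = "[" * 100_000 + "]" * 100_000

try:
    json.loads(hostile)
except RecursionError:
    # CPython's json parser turns runaway nesting into a catchable
    # RecursionError instead of overflowing the native stack.
    print("rejected: nesting too deep")
```

A parser without such a limit would crash the whole process (or worse) on the same input, which is exactly the class of failure that matters here.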
As long as one uses a single parser to read a particular input, it matters little if it produces a result different from what some other parser generates, as long as the parser has no security bugs. And JSON is simple enough to cover a particular interpretation of its spec with test suites, making a security vulnerability extremely unlikely.
Surely if one uses one parser to verify the payload and another to use it, disaster can follow, as with the iPhone verification bug.
The issue is they're heavily implying it's something specific to PinePhone. I just scrolled through the legal notices on an iPhone, and they all have the exact same wording.
Small components, such as the graphics libraries, the kernel, health data, the web browser, etc...
(not to mention they're just plain wrong about the battery/thermal not having hardware-level limits)
That's a strawman: the author never compared it to Android/iOS. What they did spell out is that some of the code they found is unsafe (e.g. code commented out to prevent false alerts that was never reimplemented in a safer way).
But the article does not mention that other phones might have similar challenges. If the article is about the whole industry, then say that; if it is specific to the PinePhone, say that. Currently it's in that weird middle zone where it sounds like it's specific, but all the facts would probably be applicable to many phones.
The actual end product is provided by Apple with no warranties? I find that hard to believe. It makes sense that the OSS libraries they use wouldn't have warranties provided by their developers, but an issue in any of those becomes an Apple issue when it's shipped in their product as long as the issue has an impact on iOS.
I believe so, at least on the software side. For example, the macOS EULA states:
> C. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THE APPLE SOFTWARE AND SERVICES ARE PROVIDED “AS IS” AND “AS AVAILABLE”, WITH ALL FAULTS AND WITHOUT WARRANTY OF ANY KIND
Sorry, but you're just plain wrong. So please don't spread false sense of security if you don't understand the issues, or the HW. You can read my other comments in this HN comment section for why.
I managed to get that much from the article but I still feel I'm missing a few pieces here. Are UEFI rootkits an actual concern, like are they common in the wild? Why should the responsibility of detecting them rest with the processor? How is this related to the Secure Encrypted Virtualization?
There have been a couple in the wild, but they aren't super common.
They've become a bigger concern with UEFI since it has a massive attack surface compared to legacy BIOS.
For a processor sitting in AWS / Azure, they want guarantees, and they're the ones EPYCs are designed for.
The responsibility has to rest with the processor, since it's the only thing executing code prior to UEFI. What it's doing is validating that UEFI was cryptographically signed with the correct key prior to running any UEFI code.
When it's first used, it saves the key for the vendor's UEFI implementation and won't allow boot to proceed if the root signature ever changes (think something similar to root certs for HTTPS).
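That trust-on-first-use pinning can be sketched roughly as follows (a Python sketch with stdlib `hashlib`; the names and the in-memory store are my assumptions, the real PSP keeps the pin in one-time-programmable storage and performs actual signature verification):

```python
import hashlib

PIN_STORE = {}  # stands in for one-time-programmable storage inside the PSP

def boot(vendor: str, uefi_image: bytes, signing_key: bytes) -> bool:
    """Allow boot only if the vendor's key matches the one pinned on first use."""
    fingerprint = hashlib.sha256(signing_key).hexdigest()
    pinned = PIN_STORE.setdefault(vendor, fingerprint)  # first boot: pin the key
    if pinned != fingerprint:
        return False  # key changed after first use: refuse to run UEFI
    # (real hardware would now verify uefi_image's signature with signing_key)
    return True
```

The first call pins whatever key it sees; any later boot presenting a different key is refused, mirroring the "root signature ever changes" check described above.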
It's only relevant to Secure Encrypted Virtualization insofar as they are both implemented inside the PSP which is a separate ARM core that runs at a higher privilege level than the x86 cores (and is the core that actually initializes the x86 cores).
This is how all phones have worked for many years, but apparently it's now becoming a thing in servers too.
> In a multi-processor or multi-core system one CPU is dynamically chosen to be the bootstrap processor (BSP) that runs all of the BIOS and kernel initialization code
These days, the "bootstrap processor" is a separate core that your OS can't see. On Intel it's the IME (running Minix) and on AMD it's the PSP (ARM TrustZone)
> Are UEFI rootkits an actual concern, like are they common in the wild?
If one segment needs to worry about UEFI rootkits, it's cloud vendors. Very dedicated (nation-state sponsored) attackers could burn a zero-day hypervisor escape to install a UEFI rootkit that tampers with the processor's integrated HSM (as the article says, tampering with it has already happened, and the exploits have been patched by AMD). As I understand it, if a vendor uses full memory encryption, that exploit could lead to decrypting and exfiltrating other customers' data.
That an attacker might flash a tampered BIOS from inside a VM makes total sense. It's surprising how many SPI ROMs there can be in a box, and how they're all basically just waiting there to be exploited.
Not sure why downvoted. I run blobless coreboot for precisely this reason. My only regret is not being able to find newer x86_64 gear that supports it. OTOH you can still buy in-production arm64 boxes that boot with zero blobs (RK3399).
The consortium has AMD, Intel, and Microsoft listed as contributors, so even if they didn't initially create the thing, they had a hand in it. The executable format used for UEFI is PE, which is telling.
> or, if the source code is not publicly released, after an update of the same operating system is released by the operating system provider
They have not released the source code, but they have released an update of their operating system on their reference Pixel hardware.
Therefore, all devices must update within 4 months of that Pixel release, regardless of source drops, per this law.