On the one hand, this is fantastic — vendors should absolutely publish proper commitments like this to their firmware images.
On the other hand, this is completely missing a proof that your device is running the firmware it claims to be running. The check of the device firmware amounts to reading a property (ro.boot.vbmeta.digest) off the running device over adb.
This verifies nothing. The Pixel is (or at least should be [0]) capable of attesting to its running firmware. There would be bonus points for having the bootloader stages flash QR codes containing the hashes of the next stages, which would enable very straightforward verification.
With a secure element-based approach, the device would need to be able to convince adb that it has a genuine Google secure element and that the secure element says the fingerprint is such-and-such. The former would likely need additional data in the Merkle tree to avoid a situation in which Google pushes out a targeted, properly signed, but malicious secure element firmware that lies about the fingerprint. (And the entire point of binary transparency is to make this type of attack difficult.)
With a bootloader approach, in principle the entire verification could be chained from ROM and no secure element would be needed.
[0] I haven’t dug through exactly what the Google secure element does. I’m quite confident that it can attest, to Google, something about the running firmware, because this is useful for DRM and the various (horrible and mostly useless) safety checks and attestations available through the Android APIs. But maybe it actually can’t attest to the firmware fingerprint to any party other than Google.
I worked in Android Security. Pixel phones (and all other phones running Tiramisu+) can attest to the full DICE chain's integrity to any app that requests it. This can be done through the KeyStore API.
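For concreteness, requesting a key attestation through the KeyStore API looks roughly like this (a minimal Kotlin sketch: the alias is a placeholder, the challenge would come from whoever is doing the verifying, and validating the returned chain against Google's root is omitted):

    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import java.security.KeyPairGenerator
    import java.security.KeyStore
    import java.security.cert.Certificate

    // Generates a keypair in the hardware-backed keystore with an attestation
    // challenge; the certificate chain returned for that key carries the
    // attestation record and chains up to Google's attestation root.
    fun requestAttestation(challenge: ByteArray): Array<Certificate> {
        val alias = "attestation-demo"  // hypothetical alias
        val kpg = KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
        kpg.initialize(
            KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_SIGN)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .setAttestationChallenge(challenge)  // nonce from the relying party
                .build()
        )
        kpg.generateKeyPair()

        val ks = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
        return ks.getCertificateChain(alias)  // leaf holds the attestation extension
    }

Whether what comes back exposes the full DICE chain or just a summary of verified-boot state is what the replies below dig into.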
>> Pixel phones (and all other phones running Tiramisu+) can attest to the full DICE chain's integrity to any app that requests it. This can be done through the KeyStore API.
I do not see this: the KeyStore API available to apps still only returns an attestation as a normal X.509 chain anchored to a local key, which is certified by Google. That is not DICE. Actually, there is no mention of DICE at all in any recent Android API docs.
IIRC when your phone requests the attestation cert from Google it uploads its DICE chain. Then Google verifies the chain and gives you a 30 day cert (with a chain starting from a Google root, then an intermediary, then your leaf).
You might be able to see some custom Google extensions on the X.509 cert which will have some extra info. But that might get stripped when the cert is shown to an app.
I don’t remember all of the details. I worked on the infrastructure for the key acquisition but most of it was already set up when I joined and I was only on the team for a few months.
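(For anyone who wants to poke at those extensions: the attestation record lives in a custom extension on the leaf certificate under a documented OID. A small Kotlin sketch; decoding the ASN.1 payload, which is where the verified-boot and patch-level fields actually live, is left out.)

    import java.security.cert.X509Certificate

    // Android key-attestation extension OID (from the key attestation docs).
    const val KEY_ATTESTATION_OID = "1.3.6.1.4.1.11129.2.1.17"

    // Returns the DER-encoded extension value from the leaf cert, or null if
    // the certificate carries no attestation record.
    fun attestationExtension(leaf: X509Certificate): ByteArray? =
        leaf.getExtensionValue(KEY_ATTESTATION_OID)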
It is entirely irrelevant because cell phone companies add “analytics” (spyware) as part of their firmware stack… which Google and anyone else cannot block.
There's a BIG difference between "My mobile carrier, who knows how much I spend with them on a monthly basis and my home address, probably date of birth and other customer details, also happens to know where I physically am" and "This piece of malware on my phone is intercepting every message I send"
> It is entirely irrelevant because cell phone companies add “analytics” (spyware) as part of their firmware stack… which Google and anyone else cannot block.
Who is that third party cell phone company adding spyware when I buy my Pixel phone from Google?
(Of course, Google can add spyware themselves; but they can also block it.)
If you buy your phone from a carrier. If you buy directly from a vendor I don't see how they (carriers) would be able to inject it. Radio firmware however is, as I understand it, a completely different thing...
Radio firmware has been a misnomer for a very long time, because the so-called baseband processor it runs on has access to and can control much more than just the radio.
As far as I know, radio firmware on current phone SoCs has full, unlimited read/write access to all of the available RAM and can preempt and modify any process running on the application processor, so any indications regarding authenticity of any software running on the application processor - as provided by software running on that same processor - are effectively moot.
TL;DR: He who controls the radio (firmware) pwns your phone.
Yes, but the point of binary transparency is to assure device owners that their devices are running official Google firmware, even if Google’s signing key is compromised or if a court orders Google to sign a malicious image.
A verified boot chain does not provide this assurance.
That’s a slightly different issue. Right now, Google could push a customized malicious firmware just to you. No one else would have a copy, and no one would be able to reverse engineer it to detect that it’s malicious.
With a fully functioning binary transparency system, Google would have to publish the fingerprint of the malicious firmware. An even stronger system would require that they publish the entire image as well. Then the attack could be detected.
According to Wired, Android Verified Boot (AVB) prevents the computer owner from changing their mind about an OS update. No going back to the previous version, and no escape from the latest improvements in Google's data collection to support its advertising services racket.
The point of anti-rollback technology is to make it so security holes are patched permanently: an attacker can't unwind updates back to the point at which the known exploit works.
The details page [1] of the transparency log explains the exact threat model that they are trying to address with this:
> Transparency systems can be used to detect—and thus deter—supply chain attacks. Let's address some examples:
> Suppose an attacker maliciously modifies a Pixel image and even manages to sign it with the key that Google owns. Anyone who receives the malicious image can query the binary transparency log to verify the image's authenticity. The user will find that Google has not added the corresponding image metadata to the log and will know not to trust the compromised image. Since publishing to the log is a separate process from the release process with signing, this raises the bar for the attacker beyond just compromising the key.
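Mechanically, that "query the binary transparency log" step is a Merkle inclusion-proof check against a signed tree head. A rough Kotlin sketch of the standard RFC 6962-style verification (not Google's actual tooling; the leaf index, tree size, proof hashes and root would all come from the log and its signed checkpoint):

    import java.security.MessageDigest

    // RFC 6962-style Merkle inclusion-proof check: confirms an entry (e.g. an
    // image's metadata hash) really is covered by the published tree root.
    fun sha256(vararg parts: ByteArray): ByteArray =
        MessageDigest.getInstance("SHA-256").apply { parts.forEach { update(it) } }.digest()

    fun verifyInclusion(
        leafData: ByteArray,
        leafIndex: Long,
        treeSize: Long,
        proof: List<ByteArray>,
        expectedRoot: ByteArray
    ): Boolean {
        if (leafIndex >= treeSize) return false
        var fn = leafIndex
        var sn = treeSize - 1
        var hash = sha256(byteArrayOf(0x00), leafData)           // leaf hash = H(0x00 || leaf)
        for (sibling in proof) {
            if (sn == 0L) return false
            if (fn % 2 == 1L || fn == sn) {
                hash = sha256(byteArrayOf(0x01), sibling, hash)  // H(0x01 || left || right)
                if (fn % 2 == 0L) {
                    while (fn != 0L && fn % 2 == 0L) { fn /= 2; sn /= 2 }
                }
            } else {
                hash = sha256(byteArrayOf(0x01), hash, sibling)
            }
            fn /= 2
            sn /= 2
        }
        return sn == 0L && hash.contentEquals(expectedRoot)
    }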
So effectively, this seems to secure against malicious actors messing with Google's (or AOSP's) own build process, i.e. by somehow inserting an MITM between the build and the signing stage.
I don't know how Google's or AOSP's build systems are set up, but I'd suspect that not many entities are able to mount a successful supply chain attack on internal networks. So (conspiracy hat on), I wonder if there is something more behind this, i.e. some recent hacking incident or a warning of one.
> I don't know how Google's or AOSP's build systems are set up, but I'd suspect that not many entities are able to mount a successful supply chain attack on internal networks.
One option is that it's not Google protecting you against Google, it's Google attempting to drive other Android vendors towards the same infrastructure in order to protect users against vendors being compelled to produce targeted backdoored firmware images.
Presumably there's something special about ro.boot.vbmeta.digest that would prevent a malicious ROM from lying about it? As in the ADB protocol being served by literal ROM code?
That value comes from the firmware and is not writable by Android. As for whether the daisy-chain of strcpy() calls is robust, and without a rich history of 0-days, that is something I don't know.
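For reference, that value is exposed as a system property, so the check boils down to something like this (a sketch only; as noted, you are trusting whatever is answering adb on the device side):

    // Reads the vbmeta digest the running device reports over adb. This is the
    // value one would compare against the digest published for the installed
    // build; the catch, per the thread, is that you are trusting whatever
    // answers adb on the other end.
    fun deviceVbmetaDigest(): String {
        val proc = ProcessBuilder("adb", "shell", "getprop", "ro.boot.vbmeta.digest")
            .redirectErrorStream(true)
            .start()
        return proc.inputStream.bufferedReader().use { it.readText() }.trim()
    }

    fun main() {
        println("device vbmeta digest: ${deviceVbmetaDigest()}")
    }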
I don't know that much about the security model. But I think if you really want to verify device integrity you should use an attestation key. That is supposed to be un-spoofable (although bugs can still occur in any software, which would defeat the integrity guarantees).
SMH at the lack of thought or understanding some ppl have for layered device security...
This has an obvious clear benefit: all of the people who have said "oh well Google could be compelled to sign a malicious update for a single user"... This is an attempt at solving that via a transparency log.
Granted, I think for this to matter much all-up, it would need to apply to PSF, Apex, general app updates, etc.... Which I'm pretty sure this doesn't even attempt to touch.
I'd love to hear Google speak to that but that seems like a huge can of worms compared to the image based hashing, signing, verification that is already part of the tooling, ecosystem and consciousness.
Recently I developed a presentation* about immutability as a design concept in computer security. As part of it, I have slides which cover Certificate Transparency implementation[0], which uses Trillian[1] as a distributed ledger. Part of Trillian's documentation includes a Firmware Transparency[2] example. For the year or so I've been aware of it, I've thought that it's a great idea, and wondered if it would ever grow as a project/practice. Digging through the links in this announcement, it appears Trillian is the basis for the distributed ledger here. Glad to see the idea has been taken further by Google!
Well, any security-aware owner of a Pixel phone is already on GrapheneOS, not only to be protected from attacks but also to be protected from Google itself. So I don't consider this a huge benefit.
This is cool for Pixels, but the problem with the Android ecosystem is that most people are running customized OS images from the manufacturer (Samsung, Huawei etc). These images also frequently contain insecure bloatware from telcos that can't be uninstalled.
My concern is over e-waste. If it becomes impossible to install custom firmware on a device then a lot of electronics will just become garbage when support ends or the company disappears.
How many people actually extend the lifetime of their phone via custom firmware?
I imagine most people with old phones simply continue using their devices after support ends.
I used the OnePlus One way past its EOL. Replaced the battery and screen multiple times; ran custom ROMs based on LineageOS, tuned by the community to revive important features like RAW and 60s shutter speed and to introduce some new ones not unlocked by the OEM, like manual focus. This included firmware modifications of various subsystems to improve the QoL, like introducing stereo playback via the speakers and tuning the speaker curves to sound acceptable.
There is a lot that can be done and there is a lot that people do.
That's something only enthusiasts do in the developed world. I'm curious whether poorer people in the developing world use alternative ROMs. Are there any stores that support ROM flashing?
We'll never be able to judge that from the POV of our fast-moving first-world tech sphere and its one-time-use consumerism. That makes it all the more important to keep those options and avenues of modification not only alive, but strong.
As individuals we can of course offer mere anecdotal evidence. Coming from Belarus, I can say that the community of talented hackers repurposing old tech was always strong there, though I don't have a strong connection there anymore. I recently covered the QuadCore mod for ancient ThinkPads and the firmware replacement that entails: https://youtu.be/Fs4GjDiOie8 I received multiple requests for details and walkthroughs from individuals in Latin America performing similar upgrades. To me, the spirit of reviving trash-grade tech to our modern sensibilities is alive and well.
Going off how people treat cars, the majority would do this if the practice is ever allowed on a wide scale (enabling commercial services that assist them for a small fee).
The issue is that most phone manufacturers don't allow owners to do this, and most that do allow it pull shenanigans that make it unpredictable and difficult.
Poorer people would absolutely choose to pay 25 dollars to extend their phone's life (replace battery + reflash to a supported firmware/os) vs buying another 200 dollar phone.
I'm hoping the EU (and the rest of the world) steps up and does what the US should have done a long while ago.
I used to do this for as long as the batteries and soldered flash would last. Though I've given up trusting randos on XDA devs and fiddling with ROMs to keep everything working.
> Pixel Binary Transparency responds to a new wave of attacks targeting the software supply chain—that is, attacks on software while in transit to users. These attacks are on the rise in recent years, likely in part because of the enormous impact they can have.
They say it's "on the rise", but the linked report in the blog talks about transitive OSS dependencies (among other related things), not binary/firmware-level tampering. Can someone explain how this would help avoid the Log4j vulnerability?
I think it could be argued that the basic form of this was around, as X.509 certificates, long before blockchains. I'm not sure what the history would be before that, though.
Also, in my mind at least, the idea for doing secure attestable history comes from git. Once you have a crypto-DAG it's a fairly easy transition into other data structures although ordering becomes an issue with trees. I'm almost sure something else must have existed because it's not a far leap to a crypto-DAG when you are doing things like HMAC.
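A toy illustration of that lineage, i.e. the git-style trick of having each entry commit to its predecessor's hash so that silently rewriting history changes every later hash (illustrative only, not any particular log's format):

    import java.security.MessageDigest

    // Toy append-only hash chain in the spirit of git commits: each entry's
    // hash covers its parent's hash, so altering any earlier entry changes
    // every hash after it.
    fun hex(bytes: ByteArray): String = bytes.joinToString("") { "%02x".format(it) }

    fun entryHash(parentHash: String, payload: String): String =
        hex(MessageDigest.getInstance("SHA-256").digest("$parentHash\n$payload".toByteArray()))

    fun main() {
        var head = ""  // empty parent for the genesis entry
        for (payload in listOf("factory image 1.0", "factory image 1.1", "factory image 2.0")) {
            head = entryHash(head, payload)
            println("$head  $payload")
        }
    }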
That sounds like a nothingburger. The bootloader always shows a warning if the image isn't signed by Google.
Google itself should be able to sign whatever code they want and mount whatever attack they want.
Could someone explain if this provides any value over signature check in the bootloader?
I believe that the bootloader can't be updated with a non-Google-signed version. And if there is a vulnerability and a malicious actor does that there would be no way to safely get the hash to verify against the log.
> Could someone explain if this provides any value over signature check in the bootloader?
If every release has its checksum entered into an immutable log, and can't be installed if it's not in the log, it makes it somewhat detectable if someone infiltrates, tricks or forces Google into signing a backdoored version for a targeted attack.
It's unlikely anyone would infiltrate Google to make a custom-signed image to target me - but if you were Obama or Trump or Snowden or Khashoggi you might be worried about that.
I say "somewhat detectable" because if there was an unexplained signed update logged, Google could just say "sorry, bug/misclick/new guy" and that'd sound plausible to a lot of us.
> if there was an unexplained signed update logged, Google could just say "sorry, bug/misclick/new guy" and that'd sound plausible to a lot of us.
Precisely. And, practically, you won't be able to audit such an update, at least not quickly. Even if you find some malicious code, they can always blame it on a rogue engineer.
On the one hand, I love that Google takes security seriously. On macOS, Windows and Linux I can easily set up a device in a way that makes it (given a decent passphrase) all but impossible for any attacker to retrieve the data on the device or modify the OS while the device is at rest. LUKS/BitLocker/FileVault encryption and UEFI Secure Boot (x86)/SIP (Apple) make sure of that.
The problem is that way too many mobile applications these days go to extraordinary lengths to make sure it's impossible, in practice, to exercise the freedoms granted by the GPL. If you "root" your phone, it's a constant game of whack-a-mole to keep banking, Netflix, Google Pay and other applications running.
On top of that, it's impossible to put a new root of trust in place... I don't want a secure boot warning message when the device is running firmware that I flashed myself; I want one when someone else has placed manipulated firmware on it. And that point applies to Apple just as well.
> I don't want a secure boot warning message when the device is running firmware that I flashed myself; I want one when someone else has placed manipulated firmware on it.
And how exactly do you propose achieving that, when that someone else might have tampered with the phone before you got it?
The goal of Google's security architecture is that a dodgy phone seller/repair shop can't pre-root the phone and siphon all your private data to Mr Evil unless they have access to a silicon fab to remake the main CPU with a new trust root.
> And how exactly do you propose achieving that, when that someone else might have tampered with the phone before you got it?
Wipe the device as a condition of unlocking the bootloader's root-of-trust keyset. Easy, and more secure than any classic x86 UEFI bootloader. That gets rid of the threat of dodgy repair shops.
The only issue will be manipulating devices before they're sold the first time, but tamper-proof packaging resolves that.
Tamper-proof packaging is a poor replacement for a first-boot warning that the firmware was replaced. Not to mention the sheer impracticality of properly implementing tamper-proof packaging (the factory would have to cover the packaging in shiny nail polish or something, then encrypt and send a high-res picture of it to the final buyer across the supply chain, at which point the final buyer makes sure the glitter aligns). Much better to do it the way it's currently done.
If a repair shop wipes someone's phone they'll be pissed, but they aren't going to throw out the phone. As soon as they get the phone back they'll reinstall all their apps and log back into all their accounts, and any malicious firmware added by that repair shop will wreak havoc.
I 100% agree that we should have ways of getting rid of these warnings on our own devices, but this isn't a simple problem.
> If a repair shop wipes someone's phone they'll be pissed, but they aren't going to throw out the phone. As soon as they get the phone back they'll reinstall all their apps [...]
This depends on whether consumers are made aware that a repair shop that "accidentally" wipes your phone might be trying to steal your bank account etc.
While education is difficult, the consumer has an advantage in this scenario because the event itself is impossible to miss and very disruptive and could lead them to start searching on the internet for advice.
Apple frequently tells customers that their data would be wiped if they send their devices in for repair, so I don't see why customers would challenge a repair shop's assertion - it doesn't seem implausible either!
I guess the lesson is (or would be) less "all resets are signs of nefarious intent" and more "if it seems reset, always reset it again yourself to be safe."
Depends on what "wipe the phone" means. That could involve clobbering early-stage bootloaders and firmware on daughter microcontrollers - the kinds of things that can only be replaced through JTAG and a good bit of tribal knowledge. It doesn't stop the most sophisticated attackers, but it certainly would disincentivize a large-scale attack of this variety, especially when you consider the wild variations that exist between Android phones at a hardware level.
A signed-by-google first-stage bootloader could display a message warning the user before handing off to an unsigned second-stage bootloader.
>The goal of Google's security architecture is that a dodgy phone seller/repair shop can't pre-root the phone
I'm curious how big a problem this was with refurbished second-hand laptops that often come with a pre-installed OS. At the very least, I have the freedom to reinstall Windows/Linux.
We need to find real solutions to the e-waste problem; it's unacceptable to be throwing away so many working phones simply because their manufacturer has decided to stop publishing OS updates after 2/3/4 years. I own a few computers that are almost a decade old and run the latest version of Debian/Ubuntu. There is no reason phones should be treated any differently.
That's an easily solved problem. We already have the pre-boot warning. It fixes that problem just fine. Add a reboot on initial setup, make the warning scarier if you're just setting up the phone, and you'll be fine. A week after you've set up the phone there's no reason why you'd still need it.
On Google Pixel devices you can load your own verified boot key into the "avb_custom_key" partition, and then the device will only boot OSes signed by it (the boot screen will also say that you are running a different operating system).
GrapheneOS for instance uses this mechanism.
Unfortunately you can only register one key and you have to wipe the device to change it, but that's still fine for most use cases.
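Roughly, the flow (as I recall it from GrapheneOS-style install docs; file names here are placeholders and exact flags may differ) is: extract the public key from your AVB signing key, flash it into avb_custom_key, and relock once an OS signed with that key is installed. A Kotlin sketch that just shells out to the usual tools:

    // Runs one external command and fails loudly if it exits non-zero.
    fun run(vararg cmd: String) {
        val exit = ProcessBuilder(*cmd).inheritIO().start().waitFor()
        check(exit == 0) { "command failed: ${cmd.joinToString(" ")}" }
    }

    fun main() {
        // avb.pem / avb_pkmd.bin are placeholder file names.
        run("avbtool", "extract_public_key", "--key", "avb.pem", "--output", "avb_pkmd.bin")
        run("fastboot", "erase", "avb_custom_key")
        run("fastboot", "flash", "avb_custom_key", "avb_pkmd.bin")
        run("fastboot", "flashing", "lock")  // relock only after the OS signed with that key is installed
    }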
Some devices allow you to sign your own images and relock the bootloader [0].
This allows you to modify your image, sign it, flash it and relock your bootloader. If you have the infrastructure in place, future updates could be rolled out as OTA updates for your custom ROM.
You'll still fail hardware attestation, and AFAIK whatever API returns the boot status differentiates between a vendor-signed and a custom-signed image.
So not perfect, but you lose the bootloader unlocked nag.
None of them appeal to me for various reasons. Instead, I'm switching to a dumb phone and will be carrying a Linux pocket computer for my mobile computing needs.
[Common] Malicious software. Software intended to cause consequences the unwitting user would not choose; especially used of {virus} or {Trojan horse} software.
What if a program installed by Google causes consequences the user would not choose? For example, "Google Play" or Chrome.
Google has been fined 4.125 billion euros because it forces computer manufacturers to install these programs by agreement. Imagine if Google had to pay computer owners (ad targets) not to modify/remove the spyware.
Projects exist to remove the spyware, so-called "de-Googled" Android. Clearly some computer owners would not choose these programs.
These are "witting users" under the malware definition.
Proponents of Google's practices will sometimes argue that witting users, e.g., people commenting on HN of their dissatisfaction with Google's practices, are not relevant. Only the "majority" is relevant. They will frequently use the phrase "most people".
However, these are "unwitting users" according to the malware definition. They are not "choosing" Google Play or other Google spyware pre-installed on their computers. Rather, they are not presented with a choice.
"Millions of users trust Google." Well, considering Google pays other companies to pre-install their software on millions of computers and to set the default search to Google, that's not surprising. We are all forced to "trust" the things we cannot change. What other choice do we have.
The elephant in the room, for Google and Microsoft, is that verifiable security is worthless if no one actually trusts the organization that released the verified firmware.
Android should have been split off from Google a long time ago.
Being more secure for Google and [ad-/tracking-supported] app-vendors is more-or-less independent of being more secure for computer owners.
If some change happens to benefit both groups, I assume it's a happy coincidence. I like to think that most Google employees try to implement win-win stuff like that, but it's pretty clear that they frequently worsen user security/privacy to improve their bottom line (nearly all their revenue comes from spying on people, and preventing them from effectively opting out).
Seems like a bit of a nothingburger: now you have two ways to verify your binaries came from Google and are unmodified instead of one. Doesn't change much.
Call me a skeptic, but I see this as political theater. If Google themselves wanted to peek at you, they would never have to look as deep as the firmware. If a _foreign_ government wanted to, and could 'poison the well', this obviously helps.
I feel like this is part of Apple's cover story for excessive serialization 'we just want to make sure the parts in your phone are the parts we own'.