You will live your whole life and never see a funnier CA response on m.d.s.p. than TrustCor's response to Mozilla, Apple, and Google cornering them on the shady stuff they seemed to have been up to. The TrustCor representative is clearly staring death directly in the face and doesn't realize it, instead trying to brazen their way through it.
"To conclude this discussion, Mozilla is denying the Japanese Government
ApplicationCA2 Root inclusion request. I'd like to thank everyone for your
constructive input into the discussion, and I'd like to thank the Japanese
Government representatives for their patience and work to address issues as
they have been discovered. I will be resolving the bug as "WONTFIX"."
"We are preparing to revoke certificates immediately, rather than waiting for certificates issued prior to 2017 to expire.
However, even if we revoke those certificates, if your judgment is not affected and our request is rejected, there is no point in doing it.
Please let us know if our request will be accepted by revoking all the certificates we issued prior to 2017."
That quote by itself is a reason to drop a CA. If you're only going to do the things you're REQUIRED to do when you're threatened with rejection/revocation, then you shouldn't be a CA to begin with. The arrogance shown in that thread is appalling.
If only regulatory agencies handled bad actors in such a no-nonsense manner. Instead it's a political mess, where larger players can strong-arm the regulator.
Glad it gave you a laugh. To me, TrustCor isn't funny. They are obviously malicious. In this case, it was mere incompetence that led to them being banned... which is way funnier.
It's hard to find resources in English describing the cybersecurity environment and posture in Japan, but having worked with Japanese organizations in that space, they have decent talent but middling institutional knowhow and process. Basically slightly more competent Germans, if I'm being honest. On the other hand, South Korean organizations tend to have their shit together.
I think she knew what was going to happen, but I don't think she could've prevented it without lies (which would be found out quite quickly). I've seen people talk around difficult questions, but this is something else.
Writing small novels in response to simple questions ("who owns the company", "in what countries are you a legal entity") was never going to work with the CA forum. It might work in corporate contexts where nobody has the time to read emails, but it only seems to have raised suspicions more in this mailing list.
When Serge Egelman came up with a plausible defence for why both the allegations and the responses to them could be true at the same time (the representative and the current employees actually being victims of a company set up in bad faith and left behind by government/bad actors), she became extremely defensive. The thinly veiled accusation of sexism definitely doesn't help their case, at the very least.
I find this a very strange response by a CA representative, or a CA in general. If the company is actually the victim of something bad, it can definitely learn a thing or two about clear communication. I do like the way the paragraphs of her email were structured (in response to, prior context, actual answer), though.
It's quite a funny example of corporate culture running headlong into geek culture, because in this case geek culture has the upper hand. I feel that in other circumstances, the geeks trying to get a straight answer would be labelled "difficult" and routed around.
It's not very amusing to me that geek culture can't handle objectively reading writing from a discipline other than their own.
Imho, Matthew Hardeman makes the best point in the entire thread:
"Something that yet again concerns me in this discussion is an issue that I touched on previously in the discussions related to Dark Matter: that unless the program is requiring transparency as to corporate governance and management/operations authority, and establishing a basis for trust and accountability at the level of those individuals empowered by participation in the program, I believe we will continue to see these subjective trust decisions again and again. [...]
I once again humbly submit that I believe the executive and operational management teams of the CAs in the programs should be required to submit to the root program personal attestations as to their position and authority along with a commitment to inform the program promptly if anything has altered or replaced their authority. I believe there should be an explicit understanding that failures by such person(s) would be held against such person(s) individually and would bar their involvement at other trusted CAs for an indefinite period.
I yet again advocate for a measurable standard for holding CAs accountable at the executive management / operations level with costs taxed upon those persons who have made commitments to the program and failed to honor them.
It seems likely to me that one or more presently included CA could be reasonably described as owned by Blackrock or Vanguard. [...]"
This is why the CA "Web of Trust" model was dead on arrival, and we really should be doubling down on making cryptography and personal trust networks approachable for the layman.
That would upset far too many balances / be way too hard, however.
Even now you can see in this thread some people going waaayyyy out of their way to try to pretend like this CA is an embattled party on an island and companies that aren't the CA don't matter even when they share entire officer structures, dev teams, etc. It's wild.
> When Serge Egelman came up with a plausible defence
I honestly feel like his explanation is likely, but she was in complete denial knowing how she's effectively left holding the bag.
Are she and the company now in a better place, having everything revoked but still standing by her assertions? Or would it have been better to accept the possibility of what happened and instead try to work with everyone involved?
Serge was giving her rope to pull herself out of this hole and she just seemed to want to wrap it around her own neck instead...
>Writing small novels in response to simple questions ("who owns the company", "in what countries are you a legal entity") was never going to work with the CA forum
>Unknown until recently by any employee officers of TrustCor we and Measurement Systems S de RL had in common a group of investors who represented funds (groups of companies and other funds), not individuals. Even though we shared a common group of investment funds, we have always operated our business independently of any other company and have exclusion provisions in place to protect the CA business from having access-to or being controlled by or influenced from any third-party, investors, equity-holders, or anyone other than TrustCor’s CA Approving Officials and employees. To the best of our knowledge (and our focused investigation) there is not and has never been shared ownership with any defense company or any USA company. This common group of investors with Measurement Systems S de RL. had already dissolved mid 2021, before these recent claims were publicized, meaning as a natural course of business and not as a reaction to any claims or adverse events. In 2021 TrustCor ownership was transferred from the initial investors/founders to the employees of TrustCor. The legal process has been very step-by-step and very slow, especially due to the protracted treatment and recent death of one key founder, Ian Abramowitz. Nonetheless, it is underway and irreversible, and the common investment vehicle was dissolved over a year ago.
I'm not sure how that isn't a straightforward answer to a somewhat complicated situation in transition.
The thing is that it wasn't a comprehensive answer. It was a wall of text that attempted to appear like it was giving an answer while not actually clarifying anything.
As a CA, even a hint of malfeasance should require in-depth, transparent answers. Not this legal BS peddling.
So you think other CAs would field these questions better?
Because from what I read, I could see a decent chunk of them getting tossed out at the end of the star chamber as well.
And then we'd be left with a more centralized system with a few large players.
If corporate structure, ownership, and governance is important (it is!), then there should be standard processes, not ad hoc lines of interrogation whenever a CA happens to be noticed.
> And then we'd be left with a more centralized system with a few large players.
The CA system is already designed like a centralized system where the operators (CAs) have full privileges. Adding another CA just means adding another potentially malicious or vulnerable single point of failure that could bring down the whole system.
What sealed it for me is that Rachel constantly tries to distance the CA arm from MsgSafe.io's operations (to try to avoid MsgSafe's problems being used to discredit the CA), but it turns out that Rachel herself is VP/Director of Operations for both companies. That she never mentions this while trying to distance the CA from the not-really-encrypted email service indicates a level of intentional deception that completely validates Mozilla's response.
Kathleen obfuscates it a bit by putting the name of the director in a footnote, but here it is put together:
> The same individual was responsible for the day to day operation of both TrustCor’s CA business and MsgSafe. They are listed on TrustCor’s website as the VP of TrustCor’s CA operations and the Director of Operations for MsgSafe. [2]
> ...
> [2] Rachel McPherson is listed as the Vice President of Operations, having “access-to and control-over the CA and CA Business Operations” in a company document submitted privately by Rachel to Mozilla. Press releases on TrustCor’s website list Rachel McPherson as MsgSafe.io’s Director of Operations, e.g. https://web.archive.org/web/20221108224150/https://trustcor.....
Exactly. The entire list of her responses are "pounding the table" attempts to throw a lot of misdirection around how the companies are separate, while avoiding at all costs the evidence that they really aren't except on paper.
It feels like all the people swayed by this have never worked for smallish shady clusters of companies or with them, and it shows.
I'm not familiar enough with it to assess its veracity, but Serge Egelman's Rachel-as-victim theory is interesting in an I-wouldn't-have-thought-of-that way.
> Based on that understanding (and again, please let me know if any of that is incorrect), I personally believe it's possible that Rachel may be a victim here, if she really is the primary TrustCor shareholder (e.g., maybe the company was given to employees, after it was no longer useful). If TrustCor's private keys were compromised at its founding or before (e.g., so that Packet Forensics could sell TLS-interception boxes), the company itself would have little continued value, so long as it passed its audits (and remained trusted by browsers and operating systems). It's therefore possible that Rachel had no awareness of this, and as a result is in the denial phase of realizing that she's the victim of a scam. This is not a statement of fact, I'm just offering it as a possibility.
In effect, no. Such a "rotation" requires all clients to trust the new root. Your PC probably gets software updates every week or every month; maybe your phone gets one a month or per quarter (until support runs out). But how about your smart TV? Car? Internet-connected doorbell? That IP phone you disconnected last winter and forgot to reinstall?
Big CAs tend to ship new roots periodically, in addition to their existing roots, so as to begin gradually phasing over, over a period of several years. So e.g. HugeCA mint a new key in 2001, the major trust stores decide to trust it in 2002-2003. In 2005 HugeCA offer certificates from the new root to customers who prefer the new root and understand the risk. Unfortunately these customers see high failure rates, e.g. the all-in-one printer scanners from a big name company they used in their offices have firmware last updated in 1998. In 2008 HugeCA confirm the big name company printer/scanner firmware update is complete and begin in 2009 selling these certs more widely, there are a few hiccups, they don't work in Windows ME which one customer insists is "the latest version". But maybe by 2011 HugeCA can announce retirement of the old CA root with a retirement date of say 2015.
Yes, you can do this, particularly it can make sense for a long-lived CA to have root #1 sign a certificate for root #2 as-if it was an intermediate, so then you can bring root #2 into use but older clients can trust it by sending them that intermediate certificate, while newer clients should rely on their direct trust.
Some older clients unfortunately can get into a state where they distrust root #1 (e.g. because it is old) but they know it exists, and so even though they trust root #2 they can see this alternate path via the intermediate to root #1 and reject the whole mess. People shouldn't write software which does this, but they did.
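In code terms, a minimal sketch of cross-signing, assuming the pyca/cryptography library (the "HugeCA" names are made up to match the example above). The new root's key is certified twice: once self-signed, and once by the old root to produce the cross-certificate:

    # Rough sketch of cross-signing; assumes the pyca/cryptography library.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def name(cn):
        return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

    def ca_cert(subject_cn, issuer_cn, subject_key, signing_key):
        now = datetime.datetime.now(datetime.timezone.utc)
        return (
            x509.CertificateBuilder()
            .subject_name(name(subject_cn))
            .issuer_name(name(issuer_cn))
            .public_key(subject_key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365 * 20))
            .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
            .sign(signing_key, hashes.SHA256())
        )

    key1 = ec.generate_private_key(ec.SECP256R1())  # old root's key
    key2 = ec.generate_private_key(ec.SECP256R1())  # new root's key

    root1 = ca_cert("HugeCA Root 1", "HugeCA Root 1", key1, key1)  # self-signed
    root2 = ca_cert("HugeCA Root 2", "HugeCA Root 2", key2, key2)  # self-signed
    # Cross-certificate: root #2's subject and key, signed by root #1. Served
    # as an "intermediate" so old clients can chain up to root #1.
    cross = ca_cert("HugeCA Root 2", "HugeCA Root 1", key2, key1)

The path-building bug described above falls out of root2 and cross sharing a subject and key: a client can build either chain, and a buggy one picks the dead end via root #1.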
I think root certs are long-lifespan, kept in hardware security modules / other cold offline storage, and only used to periodically sign shorter lived intermediates that are the main thing signing leaf certs for sites.
For example, the DigiCert Global Root CA in HN's cert chain is valid from 2006 to 2031.
They are but by their nature as core components of OS and devices they have to be long-lived, because they “bound” the lifetime of their sub-certs.
So root certificates usually have an expiry of a decade or two, intermediate certs a few years (about 5), and then leaf certificate lifespans have been cut down drastically: used to be you could get a 10-year cert, then the CABF cut that down to 5, then 3, then 2, and I think now it's a year (plus a one-month grace period, so 13 months), and some CAs issue much shorter certs than that, notably Let's Encrypt, which issues 90-day certificates and basically requires automation.
> notably Let's Encrypt, which issues 90-day certificates and basically requires automation.
Which massively reduces expired-cert problems. If you only need to do it once a year or less often, you do it manually, and at some point someone forgets / the guy with the calendar reminder leaves...
If you have to do it every 2-3 months you automate it with alerts for when it fails.
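For illustration, a stdlib-only sketch of the expiry check you'd wire up to alerting (the hostname and the 30-day threshold are arbitrary examples, not anyone's actual setup):

    # Stdlib-only sketch of a certificate-expiry alert.
    import socket
    import ssl
    from datetime import datetime, timezone

    def days_until_expiry(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()  # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'
        expires = datetime.fromtimestamp(
            ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
        )
        return (expires - datetime.now(timezone.utc)).days

    remaining = days_until_expiry("news.ycombinator.com")
    if remaining < 30:  # Let's Encrypt clients typically renew ~30 days out
        print(f"ALERT: certificate expires in {remaining} days")
    else:
        print(f"OK: {remaining} days left")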
Reading feels a bit like reading through GPT-3 essays. The fabric of spoken word is there and your eyes can glide through it, but you can't summarise any content you have just consumed - then you focus and refocus and notice there is no content, just words.
It is clear those responses were written by an attorney—the liberal use of underlined and bold text, Roman numerals to indicate the different sections, clarifying the terminology at the beginning of all their emails. It's an obvious giveaway.
Also, they tend to be overly verbose but with no real substance, as if to waste everybody's time, so it might as well have been GPT-3 instead.
You think so? It strikes me as very amateurish. Can't get to the point, ad hominem, says things that look incriminating (like, "we can't share that detail or we expose ourselves to lawsuits").
Anyway I'm not claiming it _actually_ was GPT-3, rather that it similarly lacks substance. It's instead creating an illusion of content with words.
> we can't share that detail or we expose ourselves to lawsuits
That is not incriminating, that is standard practice: say the least amount possible unless compelled by the law, because even benign things could later be used against you.
Corporate legal counsel often doesn't share the reasoning behind their advice for this very reason.
I'd like to know why Google, Mozilla and Microsoft did no due diligence on this company before approving their root CA. Why does it take being called out by an independent researcher before these basic questions get asked?
I read through it all, and multiple times I thought I was going crazy due to the constant repeating of the same answers and muddying of the waters. From how they responded I was already ready to believe they should lose their CA status, but this right here clinched it for me:
> MsgSafe offered this app (with the malware SDK) to the public via the Google Play store as part of a public beta, where it was free for anyone to download up until earlier this year. MsgSafe publicized its mobile app on social media. As of this writing, MsgSafe's website still links to the Google Play store (though the app was removed earlier this year):
>> This is not correct. The MsgSafe.io beta app was actually a testing beta, which could not be accessed by anyone via the Google Play store. The only way the app was accessible was to our employees or via a unique social media link MsgSafe.io sent out over 3 years ago, and that link only worked for a limited time. Using that unique social media link, we can tell that less than 1/10th of 1% of MsgSafe.io's users would have had the opportunity to download to test the app, and the actual number that installed it was much lower. Also, your statement about the MsgSafe.io website still linking to the Google play store is completely false. Did you even check this before making this post? In the screen-shot you shared, the icon labeled "Download on Google Play" does not direct you to the Google play store, it directs you to https://www.msgsafe.io/android which actually takes you to a 404 page on the MsgSafe.io website - it never leaves the MsgSafe.io website. Had you hovered over this link you would have seen this. Of course we're not proud of an outdated website and this should be updated, but that doesn’t make your false claim true.
Yes, but the fact that she's the largest shareholder seems immaterial to identifying the corporate officers.
And as for ownership, wouldn't knowing the person you're talking to is the largest shareholder enhance their credibility to negotiate and discuss issues with you?
I believe that's the intended effect. Sheer volume of misdirection, vague answers, and thinly veiled threats leaves people tired.
Anyone might reasonably not have the energy to keep picking apart the nonsense from long evasive answers.
Still, as pointed out in the email thread, a CA is also evaluated on how trustworthy they are in a broader sense than the baseline requirements.
Uncertainty and doubt are great at making people dizzy and muddying the waters, but compared to other CAs who respond with openness and transparency, the contrast is striking.
The Gish gallop is a rhetorical technique in which a person in a debate attempts to overwhelm their opponent by providing an excessive number of arguments with no regard for the accuracy or strength of those arguments.
I was heartened by how many respondents were like, "yeah, even if the original concerns amount to nothing, the behavior here alone is enough to justify revocation, there's no place for that kind of evasiveness in this process"
I did and don’t feel like Rachel’s responses were sufficiently considered because of the prevailing “ugh wall of text tldr” sentiment. She may be partly to blame for that, but people asked for answers and didn’t read them when given! I don’t think TrustCor’s situation is something to joke or gloat about. Someone’s entire business just got ruined because of, if Rachel is to be believed, something wholly unrelated to operation of the CA and arguably entirely out of their control.
It was purely evasive. Half of what she wrote was statements that she can't answer that, can't speculate, doesn't know, or her lawyers advised her not to answer.
The other half of her text was her answering questions that were never asked.
At no point did she answer the questions that were asked.
She was repeatedly asked about the corporate structure and shared investors. The answers ranged from claims that it'd be too much content to summarize in this forum, claims that lawyers advised against it, claims that answering truthfully would lead to tax obligations, to claims that she doesn't know and would have to speculate.
Instead, she spent lots of time explaining the remote-work arrangement in Canada (irrelevant), the upstream networks of their Phoenix datacenter (irrelevant) or the tax situation of employees/contractors working for their company (also irrelevant).
>It was purely evasive. Half of what she wrote was statements that she can't answer that, can't speculate, doesn't know, or her lawyers advised her not to answer.
To be clear: if you don't know the answer, it's not evasive to say "I cannot answer that" or "I don't know the answer" or "giving an answer would be speculation on my part".
Yes but the OP was "I did and don’t feel like Rachel’s responses were sufficiently considered" - and it's hard to consider answers which are "I can't answer".
That's exactly my point. People asking the questions feel awfully entitled to answers that fit their narrative or expectations of the events. Saying "that would be speculation and I'm not going to engage in speculation" is a direct answer, and a rather mature one at that. Nobody is entitled to somebody else's speculation. But in this thread it seems that answer wasn't appropriate. Again, my point is that Rachel did answer, but everyone immediately dismissed her answers and wrote her off as being passive-aggressive, when in reality she seemed to be acting in good faith and divulging as much as she could responsibly and factually.
Anyone responsible for running a CA with a root cert should be willing to answer any reasonable question that the public can dream up. The fact that they were unwilling to provide basic information is a huge red flag.
She was evasive and dishonest, and omitted answers to the real questions that needed to be answered, and tried to cover for that by inventing concern for responsible disclosure inappropriate for the forum.
I didn't read it all, but the Canada and data center explanations were in response to questions about what was addressed in the audit and where the business is registered. There were a few oddities but no smoking gun. Still, maybe a CA should be beyond reproach.
Exactly. It's not like Apple would happily explain to you how their tax haven is set up if you but simply asked out of concern about whether they should be trusted to own everyone's phones and push the root CA list in the first place... ... ...
The rejection, in the end, was because "it is unacceptable for a CA to be closely tied, through ownership and operation, to a company engaged in the distribution of malware", as Kathleen puts it. I read through all of Rachel's walls of text, and nothing there dispels the referred-to connection.
And it seems like Mozilla's concern is: "Is it in Mozilla's users interest to have this CA in their trust store?", which is probably the only sane way to run a trust root program. That means any concerns about the well-being of the company which runs the CA must be ignored.
I was sympathetic to Rachel at the beginning, but nothing that she said ever resolved the main concerns:
* TrustCor owns MsgSafe.io, which somehow incorporated an unobfuscated and customized version of the malware. She kept trying to distance the CA from MsgSafe, but as Kathleen points out that's disingenuous given that Rachel herself is both VP of Operations of the CA and Director of Operations of MsgSafe.
* TrustCor was, until recently, owned by the same holding company that also owned Measurement Systems, which produced the malware incorporated by MsgSafe's app.
If these things were not true, they should have been pretty easy to debunk. If they are true, I agree with Mozilla that they provide sufficient reason to revoke the CA. The ties to the malware are just too close to risk leaving them trusted.
For me, there was zero “ugh wall of text tldr” and a whole lot of “that wall of text I just read is obviously designed to obfuscate and hinder discussion” followed by noticing that she relentlessly kept pasting the SAME irrelevant/misleading walls of text over and over. She seemed to be actively trying to provoke the “ugh tldr” response as a tactic.
An example is the bit of boilerplate that kept repeating how the app was BETA and could only have gone to a small percentage of people (not relevant), and it wasn’t ever “published” (which is a lie) and how sdks get updated all the time and it is not our place to speculate about how that might have happened. This is an extremely abridged version of this “argument” that she trotted out repeatedly just trying to tire people out. There are many other examples of this type tactic in her responses. The overall effect is that she very much appears to be arguing in bad faith.
She actually did reply to that directly. The response from a few others saying "you didn't reply to that" was actually false and a clear indication that they didn't read her response.
> I did and don’t feel like Rachel’s responses were sufficiently considered because of the prevailing “ugh wall of text tldr” sentiment.
I read it all, it was extremely repetitive with little substance. When she did say something new/interesting it was cagey and often not direct.
> She may be partly to blame for that, but people asked for answers and didn’t read them when given!
Disagree. They read it and ignored the fluff that said nothing of value.
> I don’t think TrustCor’s situation is something to joke or gloat about. Someone’s entire business just got ruined because of, if Rachel is to be believed, something wholly unrelated to operation of the CA and arguably entirely out of their control.
And that's the thing, she isn't to be believed; I sure as heck don't believe her. I posted in another comment a clear lie about linking to the Android app (not to mention the constant "it was never released" BS is so mealy-mouthed it isn't even funny). The same parent company owned the CA and this MsgSafe, and it also seems clear there was a little more to the "Measurement Systems" relationship than Rachel would admit. Sorry, that's way too many coincidences. The final nail in the coffin is the ties (even if they are fragile) to Packet Forensics, which you wouldn't want in the same breath as a CA, yet the connections are hard to ignore given everything else Rachel said (or didn't say).
I also don't for a second trust all the:
> I was reminded by some of you this is a big public forum with non-CA-operators and non-browser/platform-developers present, and that participants have a lot of interest in these topics but not always the same level of experience or familiarity with the CA operations and root CA program guidelines or technical knowhow as the intended audience. Therefore let me begin by saying THANK YOU to my fellow CA/B Forum members and members of the larger community for reminding me of that, and separately thank you for those of you that have sent very nice and encouraging, supportive emails (you know who you are)
Yeah, my girlfriend goes to another school, you've never heard of it... Okkkkkaaayyy.
Then look at their "Why are you persecuting us!" line about other CAs, which just screams guilty. On top of all of that, their MsgSafe system appears to be a fraud (it's not E2EE despite their claims), so if they are acting that way with one arm of their parent company, why should we trust the CA (whose auditor looks questionable, and who have also used the same one every year)?
Don't you think that it would be better if certificate authorities were limited to certain TLDs? For example, a company from country A limited to A's national TLD? This way even if they are ordered to produce a fake certificate, they can only decrypt traffic for country A's sites.
Also browsers should display a warning when a certificate issuer for a website changes.
I think that a good system would be that the key can be signed by multiple parties. For example if you are running a web shop, your certificate could be signed by Visa if you accept card payments, by your registrar to prove that you own the domain, by the tax office in your country to prove that you are a real company and so on. And then the browser would show these signatures as badges when clicking/hovering the padlock icon. So essentially the more signatures the more trustworthy.
Right now, CAs in the Web PKI certify authenticity, not trustworthiness: when the system works correctly, a successful TLS handshake with satan.com tells you that you have connected to the actual satan.com and perhaps that it is operated by Satan, Inc (although it’s still on you to check whether it’s the Delaware one or the Kentucky one[1]). It does not, and is not supposed to, tell you whether it’s smart to sell your soul there, CA marketing materials (“users trust sites that ...”) notwithstanding.
What you are proposing is an attempt at solving an entirely different problem that is, furthermore, largely orthogonal to ensuring the integrity and confidentiality of Internet traffic. It would be nice if that problem were solvable, but every time the Web PKI flirted with that it ended horribly[2]. (And now the EU authorities are trying to force it anyway, it seems[3].) Ensuring your bytes end up on the host you named, intact and unsnooped, is a comparatively easy problem that’s also pretty meaningful, and it seems smart to restrict ourselves to that. (I’d have advocated for DNSSEC+DANE instead of CAs, even, if I had any confidence at all in the DNS registrars’ ability to handle key material and willingness to give up domain control when that ability fails.)
Unfortunately TLS is already conflating two problems as it is:
1. Is the opposite party who they claim to be?
2. Did a third party alter or eavesdrop on the transmission?
That is, am I simply being phished in private or is the government also listening to me being phished?
It's trivially easy to solve the second problem and it doesn't require any PKI infrastructure whatsoever and we delayed secure transport by decades by tying it to the generally less-useful question in #1.
There are few entities online I trust more than I would trust someone impersonating them: basically only if money is changing hands. But I don't need to know that e.g. ycombinator.com is actually ycombinator.com, because I don't trust ycombinator.com any more than I would trust someone impersonating ycombinator.com.
It is not trivially easy to solve the second problem without solving the first problem. If you don't have proof of identity, all someone has to do to eavesdrop on you is pretend to be the person you're communicating with while actually communicating with them on your behalf.
It's the difference between a passive attack and an active attack. Most people think only of passive eavesdroppers, and against them it could be true that "it's trivially easy to solve the second problem and it doesn't require any PKI infrastructure whatsoever". But against an active attacker, unless you can solve the "is the opposite party who they claim to be" problem, it's trivially easy to eavesdrop by pretending to be the other party (MITM attack).
And then you find out there are ways to convert a passive eavesdropper into an active attacker (remotely injecting packets to manipulate the TCP connection state, based on observed sequence numbers), and the distinction becomes a bit fuzzy...
It's trivially easy when that 'proof of identity' is purely a unique identifier shown to the user, and not actually tied to real-world identity. This is what a domain name is, ICANN ensures people can't register a domain name if someone already owns it, so with that guarantee, everyone trusts that 'google.com' showing up in their URL bar means that the certificate presented by the server they're connecting to is actually authorized to show 'google.com'.
That proof of identity still needs to solve the first problem... you still need to somehow prove that someone who says they're google.com is actually google.com.
B) why having an untrustworthy root CA is the most calamitous outcome possible.
Because the algorithm that yields the answer to your question really only answers the question "Does the cert I was handed cryptographically validate back to a trusted root?"
If you can pwn the routing/DNS sufficiently around an AS, you could MITM all the authoritative sources, and with something like TrustCor, willing to issue duplicitous root certs and cert chains, while being trusted by major OS and software package maintainers, you can, in theory, pull the old switcheroo, and the software will still tell you "seems legit, hoss".
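To make that concrete, here's a hedged sketch of that validation step, using the verification API in recent pyca/cryptography releases (file names and the hostname are made up): the check only asks whether a chain reaches some root in the store, not whether that root deserves to be there.

    # What "validates back to a trusted root" means mechanically; assumes a
    # recent pyca/cryptography release with the x509.verification API.
    from cryptography.x509 import DNSName, load_pem_x509_certificate
    from cryptography.x509.verification import PolicyBuilder, Store

    def load(path):
        with open(path, "rb") as f:
            return load_pem_x509_certificate(f.read())

    trusted_root = load("any-root-the-os-shipped.pem")  # e.g. TrustCor, pre-removal
    leaf = load("leaf.pem")
    intermediate = load("intermediate.pem")

    verifier = (
        PolicyBuilder()
        .store(Store([trusted_root]))
        .build_server_verifier(DNSName("example.com"))
    )
    # Raises VerificationError on failure; succeeds for ANY chain to ANY root
    # in the store, which is the whole game if a malicious root is in there.
    chain = verifier.verify(leaf, [intermediate])
    print("seems legit, hoss")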
This is part of why the Internet is architected as a "collective network of networks composed of Trusted Agents".
At the end of the day, you're depending on every middle box routing your packet to do the job to get it where it needs to go.
The type of Trust you're looking for is incompatible with technologically facilitated networking in a sense. The network cannot prove the trustworthiness of the network. Trust has to start outside it.
The trust we've rooted from is from the integrity of the institutions we've established. Something that has been eroded more and more over time as the Internet infrastructure has slowly been civically infiltrated to shape it into a surveillance tool by governments.
If you're feeling heartburn over this, the problem isn't the Internet per se. It's your Government, and your institutions, and for places that like to think they are populace driven, the people around you.
1. registering google.com requires it being available, and their registry and registrars prohibit registering already-registered domains
2. these registries contain the authoritative DNS records for google.com
3. when anyone wants to pull a certificate for `www.google.com`, they first need to create a DNS TXT record called "_acme-challenge.www.google.com" (for example)
Because of this, you know that whoever 'says' they're Google.com actually is Google.com because of how domain names work and because you see the lock icon in your browser. It's all built on trust of registries and registrars operating diligently, and it's checked by pretty much every big company watching for changes to their domains in case of malicious actions.
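(For the curious, the CA's side of that check is basically a TXT lookup. A hedged sketch, assuming the third-party dnspython package; the domain and token are made up:)

    # Sketch of the DNS-01 verification step; assumes the third-party
    # dnspython package (pip install dnspython).
    import dns.resolver

    def txt_values(name):
        answers = dns.resolver.resolve(name, "TXT")
        return {b"".join(rdata.strings).decode() for rdata in answers}

    # In real ACME this would be a digest of the "key authorization".
    expected_token = "hypothetical-challenge-token"
    found = txt_values("_acme-challenge.example.com")
    print("challenge satisfied" if expected_token in found else "challenge failed")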
That's a proof of identity. I think you're fixated on the notion that somehow "identity" can only refer to some kind of legal entity, but "identity" in the WebPKI sense is "control over a domain name"--and WebPKI exists to facilitate communication of proof of that identity.
No. A PKI-less system can ensure I am communicating with one and only one party. That's all most comms need. If I want some assurance that party is who they claim to be that can be negotiated over that channel.
> a successful TLS handshake with satan.com tells you that you have connected to the actual satan.com and perhaps that it is operated by Satan, Inc (although it’s still on you to check whether it’s the Delaware one or the Kentucky one[1]). It does not, and is not supposed to, tell you whether it’s smart to sell your soul there, CA marketing materials (“users trust sites that ...”) notwithstanding.
Well, we are certifying authenticity, but we do it with the trust of the CAs. But if we can't trust the CAs, maybe there are other authorities we can trust more, or we can at least share the trust between several parties.
But I agree with you on the sentiment that stopping snooping technically is the easy part and this is already possible today with Public Key Pinning, DNSSEC, client certificates and so on.
But that's why I think the role of the CA should change, so that the signatures actually give some form of tangible trust that people can relate to.
> But that's why I think the role of the CA should change, so that the signatures actually give some form of tangible trust that people can relate to.
But this is a tough thing to do. The best way to do this would be to introduce a new type of certificate, that can be stapled onto web requests maybe (eg. a signature at the end of the http body), that isn't controlled or authenticated by any existing system. This would allow clients to implement this system based on merit and based on whether or not it solves problems and concerns these clients have. For example, if it's literally just EV certificate boogaloo[0], then Chrome can choose to not implement it.
But when you try to force it onto people in the form of QWACs or what have you, and have it government-mandated, this goes against the theory of 'open source' standards being adopted because they're a good standard, and instead degrades user experience by forcing client vendors to implement them regardless of their flaws.
It is obvious that inventing the CA business was a mistake, in retrospect. Netscape wanted nothing to do with it in practice.
What we should do is restrict the few CAs that legitimately issue non-domain-validated certificates for specialized applications to only that role. That includes things like signed code. Things like extended validation were invented solely to print more money and should be phased out of web browsers. They are not "more secure" and should not be regarded as such in any user interface.
The parties to issue domain validated certificates should be the domain registries. They do their job reasonably well already. They are already tasked with issuing domain ownership. It is only natural that they should validate this ownership cryptographically as well. It is a trivial extension to their business, and the registry/registrar model can be kept intact.
The trust chain for a domain validated certificate today includes both the registry and the CA, and either can fail. The risk is strictly minimized by removing the CA from the equation. CAs provide no value to the end user.
You will find several high-profile people arguing against this. Do note that every one of them has a vested interest in the status quo, either directly or indirectly by having CAs and governmental agencies as their clients.
How do we change this system? Mozilla has by way of their history some weight in these matters, and several capable people on board. It is my intense hope that they have a long term plan. Let's Encrypt is, as great an achievement as it is, still built on a broken trust model. But it could also be an excellent beachhead into a strictly better trust model for the end users.
> You will find several high profile people arguing against this. Do note that every one has vested interest in the status quo, either directly or indirectly by having CAs and governmental agencies as their clients.
For example, Azure's Key Vault has built-in certificate issuance automation capability, but only with two for-profit CAs: DigiCert and GlobalSign.
> Things like extended validation were invented solely to print more money and should be phased out of web browsers. They are not "more secure" and should not be regarded as such in any user interface.
I think it would be great if something like EV certificates were available from national governments.
We have pretty solid digital ID support in Austria, but all the tech for signing and authenticating documents (useful for invoices or account statements) requires special software and isn't built into the web browsers and email clients that people use.
It would be nice if I clicked a link in an invoice email, if I could check that aws-billing.at is indeed a domain that belongs to "Amazon Web Services" registered in Austria or if it is a phishing attempt from a script kiddie in a foreign country.
That could be usable for certain specialized applications, such as the authentication of documents you mention, but not for authenticating web sites.
For domains this assumption has been proven wrong in practice several times. There are too many issues with almost-identical names, or names that merely look identical but aren't, or just the difference between "Amazon Web Services Inc." in two different jurisdictions.
Troy Hunt has made several long blog posts with some convincing real world examples.
It is easier for end users to see which is more reputable of "amazon.com" and "amaz0n.biz", than it is to value "Amazon Inc." against "Amazon Cloud Services". It is not that the CAs are doing a bad job. It's that domains are the identity we really care about.
Furthermore, I am of the opinion that CAs should be destroyed.
> Don't you think that it would be better if certificate authorities were limited to certain TLDs? For example, a company from country A limited to A's national TLD? This way even if they are ordered to produce a fake certificate, they can only decrypt traffic for country A's sites.
I think it would harm users relying on certain sites. This already happened with CNNIC and .cn, so I don't think that mandatory restrictions would make it better (and besides, the only CAs which voluntarily ask to restrict issuance to certain TLDs are government CAs).
In 2015, CNNIC impersonated Google's domains (which, even though Google has a minimal presence in China, is still a faux pas).
Also, to clarify: restricting it to specific TLDs where the CA physically operates would amplify local laws, while restricting it to TLDs outside the CA's physical presence would basically mess up enforcement and would spur governments to curate a mandatory sanctioned list (heard of EU QWAC?). It's a lose-lose case either way.
I’m aware of the incident, but I don’t understand your reasoning. Had CNNIC been restricted to issue certificates only for .cn domains, the incident would have never been possible. That should be a supporting evidence for CA name constraints.
I don’t understand the “amplifying local law” part. How is restricting the power amplifying?
> For example, a company from country A limited to A's national TLD?
That's not an example of limiting CAs to certain TLDs; that's an example of limiting certificate users to a single CA. I'd sooner use a self-signed certificate than one signed by a monopoly government-controlled CA.
But it could backfire in the other direction. Let's say I trust AWS and Let's Encrypt (the two CAs I currently use for my domains - that was the only reason I picked those two, both HQ'ed in the US); would I then be forced to use a Spanish CA for my .es domains or a Tuvaluan CA for my .tv domains?
What I don't get is how this wasn't thought up to be put into the SSL/TLS standard when it was built. How did we wind up with infinitely powered root certification authorities?
There is the part where the usage of X.509 certificates for DNS names on the Internet, as opposed to in the context of the (nonexistent) X.500 directory, is a gigantic hack[1]. This means the definition of “when it was built” is rather hazy. Also, the 1994 Netscape implementation literally accepted a CN=foo.com certificate when connecting to bar.org, so the state of SSL when it was built is not exactly a stellar reference.
Still, the name constraints extension, which restricts all certificates (transitively) issued from a given CA to a given DNS subtree, has been in the “Internet profile” of X.509 (PKIX) since the December 1996 draft[2]. The problem from this technical point of view is that very few implementations supported it until a couple of years ago[3].
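For the curious, here's roughly what minting such a constrained root looks like, assuming pyca/cryptography (the CA name is hypothetical; a dNSName constraint of "cn" covers cn and everything beneath it):

    # Sketch of a root carrying the RFC 5280 name constraints extension,
    # limiting it to the .cn DNS tree; assumes pyca/cryptography.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example National Root")])
    now = datetime.datetime.now(datetime.timezone.utc)

    root = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365 * 20))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        # Conforming clients reject anything this root (transitively) signs
        # for a DNS name outside the permitted subtree.
        .add_extension(
            x509.NameConstraints(
                permitted_subtrees=[x509.DNSName("cn")],
                excluded_subtrees=None,
            ),
            critical=True,
        )
        .sign(key, hashes.SHA256())
    )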
Is this actually part of the standard? AFAIK TLS allows/demands to authenticate one or both sides via certificates, and defines a mechanism to delegate trust via certificate chains. It does not, AFAIK, define how trust is established, and my guess is that the standard authors realized that this is an infinitely complex topic that should not be intermixed with the technical and cryptographic side of the problem.
It just turns out that delegating trust to root CAs, CAs, and browser/OS vendors (the latter via built-in certificate lists) makes it easy for the end user.
TLS itself has no opinion about how certificates work. AFAIK it would be totally fine by the standard to put a JPEG photo of your primary school certificate for 10m swimming where the certificate goes in the protocol. If the other party is OK with your proof that you can swim to secure the connection, all is good.
Netscape invented all this stuff in the 1990s as SSL. Turns out you need a PKI to make it work, because of a tricky edge case which otherwise makes the whole thing worthless. So, they used the existing but little used X.509 PKI left over from the X.500 directory work, even though the Internet is not part of the envisioned global network X.500 is for. The X.509 PKI had a bunch of famous brand "trustworthy" companies minting certificates.
PKIX, an IETF working group to figure out how to force X.509 to be suitable for the Internet, adds stuff like SANs (Subject Alternative Names, a way to express Internet ideas about naming like IP addresses and DNS names) but that all happens after SSL 2.0 and SSL 3.0 and people start writing https URLs.
> It just turns out that delegating trust to root CAs, CAs, and browser/OS vendors (the latter via built-in certificate lists) makes it easy for the end user.
This flexibility is what allowed let’s encrypt to bootstrap, right?
The same reason that lies behind many problems in IT: a lot of the early (and now still foundational) protocols date back to times where the participants in the network were universities or other large entities (governments, large corporations) and trust was assumed between them.
Obviously, that broke down over time, and nefarious actors (both governmental and private) popped up.
That isn't the only reason. Another big one is that the problem itself is hard, so no one could come up with an easy solution. How would you establish trust in a world without root CAs?
Peer-to-peer? That would require you to constantly monitor the activity of those peers and update your list of trusted certificates.
Singular trusted entities like browser or OS vendors? We already have that. Browser/OS certificate lists are currently the entry point and have even higher priority than root CAs -- that is what the article is about.
Government bodies are an easy-to-reach-for "trusted" entity because, while you may or may not actually trust them, you are forced to "trust" them in many aspects of daily life anyway. You are forced to rely on the fact that the police don't randomly knock on your door and arrest you. Government bodies may (depending on your country) simply demand higher taxes than you owe them, and you have to fight a lawsuit to avert that. Government bodies can easily falsify evidence that makes you lose your house because it now says that somebody else is the owner (lawsuit again).
Basically, establishing trust in a matter of "I am actually talking to the legitimate server behind example.com and not a MITM" is easy, that could be solved by DNSSEC and cert pinning (although for legacy reasons, we went with CAs as the middle man).
But the really interesting thing, which is unsolved to date, is making sure that when you type "bank.com" in a browser, you have some sign that you are actually talking to Bank LLC and not some other entity.
In ye olde times, you didn't have to worry about someone else spoofing the domain name of "bank.com", e.g. as "b4nk.com" or "bänk.com", partly because access to registering domains was complex and expensive, and partly because criminals hadn't found out they could make money...
I feel like the underlying cause is that we give a significance to URLs that they should not have. When, in your example, I want to visit the website of Bank LLC, I have to know that its URL is bank.com (right now I usually know this through Google), then enter that and then have to deal with the problem you describe.
This would be better solved if the URL as a middle step did not exist and I could directly select Bank LLC. Right now we rely on Google to shield us from domain name spoofing.
But then, unless we exchange keys with an entity beforehand (1), how can I even express that I want to visit that entity's website? Let alone solve the spoofing problem. We have to delegate trust to search engines, browser built-in lists, governments, company registries, whatever.
(1) possible, but only works if you know in advance that you want to communicate with a specific entity. Like before opening a bank account. It does not help when you don't even know which banks make an offer you are interested in.
The current PKI is geared toward ease-of-use and adoption.
It can also be turned completely around: only trust a single root certificate. This design is often used in client authentication: each client needs to get its certificate signed by the one single CA that's trusted by the server.
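A minimal sketch of the server side of that design with Python's stdlib ssl module (file names and port are hypothetical): the context's trust store holds exactly one root, so a client cert from any other issuer fails the handshake.

    # "Only trust a single root" for client authentication; stdlib ssl.
    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
    ctx.load_verify_locations(cafile="our-only-root.pem")  # the entire trust store
    ctx.verify_mode = ssl.CERT_REQUIRED  # client must present a cert chaining to it

    with socket.create_server(("127.0.0.1", 8443)) as listener:
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()  # handshake rejects any other issuer
            print("authenticated client:", conn.getpeercert()["subject"])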
Right, but implementation-wise we've never implemented anything else as a default. Domain suffix pinning is available as an extension, but that's all it is - very few TLS stacks support it and that's unlikely to ever change.
In context, it's pretty clear that that line is code for "it's been around for a while and accumulated a lot of cruft that I didn't know about until just now".
> Upon further investigation into our domains and with additional information from our legal team, we have found that TrustCor acquired the DecoyMail system many years ago as the basis of our MsgSafe.io product and service. First available in October 2000 (over 22 years ago), the DecoyMail product (and its successor, our MsgSafe.io) is an incredibly sophisticated and sufficiently complex system with many components. A single component of MsgSafe.io allows domain names to be conveniently purchased through the software’s web interface which triggers a backend domain-registration 'register' mechanism that is pointed to an API or registrar account.
It's also the exact opposite of what you want in a certificate authority. You want simple, secure, and dependable, not complex, hard to verify, and easy to game.
I'm pretty disappointed by this outcome. I read the entire thread. If you skim the thread and just read the sensational responses then it paints a pretty grim narrative of a CA that can't be transparent. But if you actually read Rachel's responses she responds to every single accusation several times over. Yeah it's a word swamp to wade through and the repeated context was annoying, but her company is under attack… what would you do? Would you not try to provide as much rebuttal as possible?
It's pretty… idk… irreverent to take the stance: oh gosh look at this person squirming under pressure to defend themselves why would they do that they must be guilty and malicious. I can smell it! It really feels like a perversely inappropriate forum for discussing a complicated issue like that. Rachel wasn't trying to deceive so much as she was trying to set guidelines for appropriate review of the material at hand. And then the comment at the end to her when she was simply trying to make sure the transition happens cordially without breaking existing certificates, as seems to be the intention, "why does Microsoft need to answer to you?", is just snotty on a whole new level. Honestly it makes me think we need special courts and some sort of process to handle this stuff because people don't have time to… process.
I am probably in the minority here, but I can’t help but feel like Rachel was a victim of a sloppy but effective smear campaign. I suspect the outcome would have been different if this was handled in a court of law.
To me, it looks like Kathleen Wilson from Mozilla did a great job sorting undisputed facts from all the noise in https://groups.google.com/a/mozilla.org/g/dev-security-polic..., and the noise that came from Rachel mostly served to obfuscate that these facts remain facts, and leave removal of the CA as the logical conclusion.
Based on that, it seems like the "rebuttal" is trying to rebut irrelevant things, leaving the undisputable elephant standing in the room. (I admit I haven't read the entire wall of text.)
> I suspect the outcome would have been different if this was handled in a court of law.
If it was a criminal court trying to prove something beyond reasonable doubt, certainly. For a CA, it's the other way around, there needs to be strong evidence that keeping the CA is beneficial to the users.
I agree that Kathleen’s response (tone, articulation, scope) was on point. Objectively, it does become hard to justify the value of a CA when there’s a mob of people questioning the value. In a very raw sense, this is probably the most user-centric outcome. So I will sleep on that.
But in a process sense, I am left wanting. I still don’t know what damage was done and why TrustCor CA got this special treatment in the first place in any way material to their CA issuing business, which they appeared to put great effort into operating by the books.
My read is that Mozilla were much more concerned about the shared ownership and operations with Measurement Systems than about the presence of the malware. I think we can agree that you can't be doing crimes under one company name and simultaneously operate a trusted CA under another?
I do agree that we shouldn’t allow something that overt.
But, if I read correctly, Rachel claimed that there was no longer any shared ownership, and tried to explain that ownership in the sense that the word was being used was not a correct term in the first place. I believe she said it was a shared incorporation service / legal counsel / investor, at most, and that the speculation as to that relationship conferring any authority pertaining to the CA's operations was entirely incorrect, since the executive authority had long since been signed over to actual company officers.
I read the full thread (except for paragraphs where she pasted from previous responses).
She failed to reasonably and convincingly refute some allegations. There were repeated requests to provide information, some of which would be trivial to produce if acting in good faith.
After reading the exchange, I (as a reasonable bystander with no material interest in either side):
* Don't understand the relationship between TrustCor and the malware distributor in a clear way that company ownership records would provide
* Take it as a false statement that the mail service doesn't have apps, as its website advertises them
* Don't understand how their auditor audited them, given that they don't appear to have the presence in Canada that the extracts from the auditor's findings state as factual
Unrelated to her responses, I could take it on faith that a rogue developer added spyware from a company with the same owners, but the finding that the payloads were sent to TrustCor servers diminishes my confidence that sufficient controls exist in the company, and makes it hard not to question their security as a CA.
Re: your last point: I find it especially concerning that all the questions about TrustCor's apparently compromised server were answered with, "MsgSafe's and TrustCor CA's infrastructure is separate". The concern was that TrustCor's practices led to their servers being compromised, which isn't a great sign for a company which operates a CA, even though it wasn't the CA servers themselves which were compromised. Nothing Rachel wrote indicated that the CA servers are operated in a more secure way than the MsgSafe servers, nor that they have changed any practices in response to the compromise.
"no longer any shared ownership" was asserted, but never backed up because (it was claimed) issues with getting legal documents updated in a timely fashion.
Combine that with basic questions about how exactly ownership changed that were never answered, and instead obfuscated behind reams of "nothing speak".
The final basis for the determination seems to be that the main loss from distrusting the TrustCor CA was their sibling company's private email service, which is, at best, advertising itself under a very shady definition of E2EE.
Thus this seems like an easy decision to me.
The interesting conclusion that follows from that is that if you are going to operate a shady CA, it behooves you to find some large clients to make cost of revoking your trust higher.
>The interesting conclusion that follows from that is that if you are going to operate a shady CA, it behooves you to find some large clients to make cost of revoking your trust higher.
...Which in essence means CAs probably shouldn't exist as a standalone thing, and everyone should learn to build their own trust networks. None of this vouch nonsense, or Trust theater.
But she never said who actually owned these companies or how they were related, and said doing so would lead to tax problems. That was rather suspicious.
I have no problem saying that if your ownership structure is such that your lawyers or accountants have advised you not to reveal it publicly, you should not be in the CA business.
Apple runs a bunch of crap through a tax loophole in Ireland. Should they be trusted running the entire mobile ecosystem that underpins all of this in the first place? I actually agree that shady companies shouldn't be swept under the rug. But I don't agree with the hypocrisy of singling out some random CA for doing things that most every other company out there does because we lack the backbone as a society to put a stop to the shadiness.
If they are transparent about what they're doing, then it's not the same case I was talking about.
I can't see Apple saying "Well, on advice of our lawyers we can't actually explain our corporate structure to you." Is it a secret that they have a corporate entity in Ireland, is it a secret what they do with it? Or is it public knowledge that they don't hide?
So I wouldn't describe secret ownership structures as a thing "most every company out there does." But I'm not going to say Apple doesn't do unethical things. (Also, is Apple even a trusted root CA for Mozilla or Microsoft browsers?)
I think non-transparency is an even higher level of problem for a CA. Secrecy about your corporate structure does not seem okay for a CA -- we need to know who they are and who controls them, non-negotiably. Secrecy of corporate structure does not seem like a thing most every company (or every CA) out there does.
But it's quite possible Apple should _not_ be trusted to "run the entire mobile ecosystem" that uses Apple products. You can make that argument. And we can talk about what the heck any of us can do about it individually or collectively if so. That's a different question than who should be allowed as a trusted CA root, or who Mozilla or Microsoft should allow as a trusted CA root.
When you say "that underpins all of this in the first place", I'm not sure what you mean; Mozilla and Microsoft trusted CA roots effect people who aren't doing anything with Apple products, Apple does not in fact "underpin" the entire SSL CA system in the first place. I don't know what to do about the Apple ecosystem if Apple can't be trusted, but I support Mozilla, Microsoft, or anyone else removing trusted CA roots belonging to companies with secretive corporate structures, ownership, or governance. All of this can be true. Apple doing unethical things doesn't mean mozilla or microsoft should allow a trusted root CA with secretive corporate ownership structure.
Sure. The Apple stuff is just an example, I don't mean to suggest they're a CA, but they are trusted to ship the list of CAs that you trust to your devices as are MS and Mozilla, so the exact same question of "should we trust them if they are a corporation of questionable ethics that do the same sort of tax things" exists and is apropos. Why is there a double standard? I find it rather inconsistent that we're going after some "shady" CA for essentially not being forthcoming in response to allegations that they consider false and have no duty to set straight without material proof that the allegations are to be taken seriously, and who look to be the target of a journalistic smear campaign involving forming similarly named corporate entities in the US to try and extract private information about the company via extrajudicial means. I mean why stop with TrustCor? Let's deploy the arsenal! Let's examine the interests of all parties funding all of the systems we trust in society. Seriously. If we're going to give a shit about something why is it some CA nobody's heard of where there is absolutely zero evidence of non-compliance with the required CA processes? Why spend effort on this? It's hardly news that companies try to minimize tax liability by structuring themselves in advantageous ways. What, pray, is a hallmark of a trustworthy company? Perhaps the public should vote on CA inclusion in the root trust list. Fuck the CA oligarchy.
To be honest, it sounded like Rachel herself did not know exactly how the company ownership was structured. It seemed obvious that it was a US company that incorporated abroad for some reason, and that alone is pretty sketchy. It looks like they are trying to hide who actually controls the company. That alone should be reason not to trust them.
It's not a strawman. Literally we're saying "you see, TrustCor CA didn't do anything wrong by the books, but we can't trust them anymore because they can't articulate their corporate structure on demand after scandalous allegations". Well, I simply ask people to consider how any other corporation in the same situation would respond. My bet is they'd also be less than forthcoming. And my example is Apple, who we know exploits tax loopholes via complex corporate governance structures, and who everyone seems okay with trusting. It just doesn't make sense to me.
Apple is a public company and it's very clear who owns and who controls the company. They're a multinational company that consists of multiple legal entities, and it's generally not a secret who you are doing business with.
TrustCor is a company that looks like a front for a Spyware maker, and when asked about that they say: "It's not like you think, but we don't want to tell you what the actual situation is, so you'll have to trust us, it's fine! Also the spyware we were caught distributing is totally not our fault, it's from a contractor in a completely different business unit and is totally independent from our CA business, but again we can't tell you more because it is secret. But trust us, the CA business is completely legit. And the sketchy things you found were all the idea of a guy who passed away recently, so we unfortunately can't ask him why he did it, but it's all legit don't worry trust us."
> I think we can agree that you can't be doing crimes under one company name and simultaneously operate a trusted CA under another?
Playing devil's advocate: Why not? I mean yes, obviously if you end up in jail that might interfere with your ability to operate a CA (or any company for that matter). But barring that, as long as they haven't done anything to affect the security or proper operation of the CA certificate itself, why is that a basis for removing them from root stores? To the best of my knowledge this action is unprecedented.
> can you trust an entity in one context when they have proven themselves untrustworthy in another
We do that all the time. If, rather than TrustCor being associated with a company making malware we'd instead found out the company's CEO had cheated on his wife, would that be grounds for removing them from the root certificate store? Context matters.
Why the ad hominem attack, calling security researchers, professors, professionals, and employees from Apple, Mozilla, and Google a "mob"?
"TrustCor CA got this special treatment"
I'm not a regular on that mailing list; is there any source showing that this is special treatment, and that other CAs that are creators of spyware, snake-oil encryption software, etc. are treated differently?
There is no ad hominem attack. And, I mean find me a company on the global stage that isn't optimizing taxes using offshore holding companies. If that's too shady to be allowed for a CA, then we shouldn't allow Apple to do it either.
The security BS was being sold by a sibling company, heck, the person responding is a high up in both companies. And there is a lot of evidence of them being connected to the malware vendor.
If they can't rebut those concerns/connections in a clear and convincing way, they have no business being a CA. If you are satisfied with the answers, more power to you, but I honestly don't know how you could be after reading through those emails.
It's frustrating because you're just repeating the same drivel other people who don't have the situation straight are. Nobody related to TrustCor CA is connected to a malware company. That's factually incorrect. They are connected to an email privacy company which offers E2EE email but which, for product reasons, doesn't enable it as the default when you create a new account. The alleged malware company and the email company were historically related when they were born because they shared an investor. But that is no longer the case.
No, the person asserted that they aren't connected, and then offered lots of words about how they aren't connected, without actual good explanations as to why we should believe that assertion.
So, what you are saying is that they just happened to have the same investor, the malicious developer they say worked for them just happened to include malware from that company (unobfuscated, unlike every other example available), said developer just happened to be able to route traffic through the company domains, the two just happened to have identical corporate officers, and they just happened to be related to a company that brags about being able to bypass SSL?
Let's just say that there is enough there that they had better have a very clear explanation for it, and instead they just keep deflecting, deflecting, deflecting, or refusing to answer. I'm sorry it is bad for their business (assuming they actually are innocent of all this), but that is not an appropriate response for a CA when someone is asking legitimate questions based on legitimate suspicion of what would have to be the world's worst series of coincidences.
Whenever you push that false TrustCor narrative, I will answer with the question that has not been answered: why did TrustCor have the source code of the spyware no one else had?
I think showing empathy is good and important. Responding to accusations on a public forum is understandably stressful, I could understand how it's hard to stay entirely placid in that situation. And I strongly agree on inflammatory comments like the Microsoft comment at the end, which do nothing to raise the level of the discussion.
However, I think it's helpful to consider public comments separately from the responses of browser vendors. I think they did an admirable job of keeping the contents of their messages calm and focused on establishing uncontroversial claims. In no way are Apple or Mozilla's responses trying to make the person squirm, or trying to 'smell the guilt'. Mozilla's final assessment rests on TrustCor's quantifying value statement in light of the MsgSafe.io findings, i.e. the close tie of TrustCor operatives with this malware operation.
The legal system is hardly a panacea. Legal battles can be made to last many years. And a court of law has no ground to litigate on questions of trust in the first place.
The forum was able to establish a list of important and uncontested claims in a few weeks of strenuous discussion, and their assessment of the benefits of keeping TrustCor vs. the risks seems reasonable to me. Third-party inflammatory messages about Microsoft notwithstanding.
I do agree with your assessment of the official responses. I’ll admit I may be overly sympathetic towards Rachel, but after reading her responses, I was left wanting a more substantiated resolution. It’s hard for me to trust a “value judgement” in the middle of a riot, so to speak.
> Responding to accusations on a public forum is understandably stressful, I could understand how it's hard to stay entirely placid in that situation.
Which is why at that point you hire lawyers and a corporate communications agency to do this for you. When your company's existence is on the line, you don't want to do that stuff yourself.
Which is very hard when given an exploding timeline of a few days to respond… Honestly, I felt like some of her responses were crafted with input from people with legal training, and that's exactly what turned people off, because everyone knows lawyers can't be trusted, right?
The CA forum is designed to be a forum of trust, among equals.
The fact that her responses read like a letter written by a bad lawyer already violates that trust. She's saying as little as possible, admitting nothing, and constantly trying to evade claims on a technicality.
I guess to me that behavior also makes sense if one has nothing to hide and is unfairly being asked for a bunch of info that isn't appropriate or relevant to provide, because of absurd, unsubstantiated claims, made by a journalist who seemingly has an agenda, that they are in bed with malware authors.
Interesting way to describe a situation in which company A had the same owners as malware company B and also integrated a never-before-seen unobfuscated copy of company B's malware into company A's app.
The claims are neither absurd nor unsubstantiated.
Rachel admits that TrustCor (company A) and Measurement Systems (Company B) had the same investors:
> Unknown until recently by any employee officers of TrustCor we and Measurement Systems S de RL had in common a group of investors who represented funds (groups of companies and other funds), not individuals.
She argues it doesn't matter because those people don't own TrustCor anymore, but can't or won't provide any details about the supposed ownership transfer.
Rachel also admits that the supposedly secure email app owned by TrustCor (which lies about being E2EE) had Measurement Systems' malware built into it, which she blames on a rogue employee:
> Prior to my original reply, we had already completed an investigation related to this activity. Our software revision control system revealed immediately when the software was introduced and which developer introduced it. [...] Also as I previously stated, "Whether or not the SDK was added for a developer’s own financial gain or otherwise is beyond us and we don’t care to speculate." Our investigation found the developer in question properly signed our standard "Confidentiality Obligation and Invention Agreement” that requires any developer to obtain a corporate license to any 3rd party software or intellectual property the developer chooses to include. We confirmed through corporate records and email searches that no such agreement was ever obtained by the company or company counsel. Also, none was included inside the software/check-in to revision control. We also confirmed no approval for including this third-party software was ever obtained from Wylie (technically the manager of the developers at that time). Technically that individual developer violated our Confidentiality Obligation and Invention Agreement.
So, which part of what I said is not actually true?
I could be mistaken, but TrustCor CA is company C. TrustCor (company B) is where a rogue contractor allegedly added "malware", or in industry parlance, analytics software, to a product as part of work to instrument the app that never shipped publicly and thus never harmed users. But TrustCor CA is operated entirely independently from Company B. Furthermore, this entire thing is predicated on an allegation that because some piece of analytics malware appears unobfuscated in their app but obfuscated in others, they must have exclusive access to the source and therefore must be the authors. That's... quite the leap. I can think of many other simple explanations for why the incorrect build of some software might appear in a software product. Anyway, I don't believe it's correct to say Company B that "put malware in" their app is the same as Company C that operates an above board CA. And I'm beginning to question whether this software the opportunistic researchers found is actually even malware in the first place.
> I could be mistaken, but TrustCor CA is company C. TrustCor (company B) is where a rogue contractor allegedly added "malware", or in industry parlance, analytics software, to a product as part of work to instrument the app that never shipped publicly and thus never harmed users.
> Anyway I don't believe it's correct to say Company B that "put malware in" their app is the same as Company C that operates an above board CA.
You are mistaken. There is no Company C.
The company that put the malware in their app is MsgSafe.io, a "secure email" provider that advertises E2EE but doesn't actually provide E2EE. MsgSafe is owned by TrustCor. Again, this is something that Rachel readily and repeatedly admits in the email thread, for example in her 18 November email:
> Also, I will use "our company" when speaking of TrustCor (the CA operator) and MsgSafe (the email service).
MsgSafe may technically be a different company from Trustcor in the same sense that Google and Alphabet are technically different companies, but Rachel considers them both together to be "our company."
TrustCor/MsgSafe, Rachel's "our company," is Company A.
Company B is Measurement Systems. Measurement Systems is the company that provides the malware to app developers, not the company that put the malware in their app. As quoted in my previous post, Rachel admits that TrustCor and Measurement Systems had the same investors. According to public records they still have the same investors. Rachel claims that the previous owners have since divested, but (1) this is not reflected by public records and (2) she is unable or unwilling to provide any documentation of it. Also as quoted in my previous post, Rachel admits that TrustCor/MsgSafe's app contained Measurement Systems' SDK.
> to a product as part of work to instrument the app that never shipped publicly and thus never harmed users.
This is false. The app, although in "beta," was available on the Play Store and linked from MsgSafe.io as well as publicly advertised from the MsgSafe.io twitter account.
> Furthermore, this entire thing is predicated on an allegation that because some piece of analytics malware appears unobfuscated in their app but obfuscated in others, they must have exclusive access to the source and therefore must be the authors. That's.. quite the leap. I can think of many other simple explanations for why the incorrect build of some software might appear in a software product.
No, it's not. There are various other pieces of evidence tying TrustCor/MsgSafe to Measurement Systems, including domain registrations and common investors.
> And I'm beginning to question whether this software the opportunistic researchers found is actually even malware in the first place.
This, now, is truly absurd. Measurement Systems' malware SDK captured and uploaded information including the wifi router SSID and MAC, the phone number and email address associated with the device it's running on, the device IMEI, clipboard contents, and GPS locations [1]. There is no good-faith argument that can be made against it being malware.
I've seen many analytics frameworks that try to capture whatever device identifiers they can get their hands on. I bet half the apps on your phone use one. And this has been an accepted practice in the industry for years. Why do you think Apple and browsers have been slowly removing access to these IDs? I tend to agree that such data collection is unnecessary and unwanted, but if a product or service is putting that shit in their software, and they call it out in their privacy policy and users consent to providing that information, then I don't see the legal problem. Though I certainly wish we would make laws that disallow such practices.
The problem here with Rachel's behaviour is not whether they are guilty or not. The problem is that there is an expectation that a CA meet a certain standard of behaviour and be able to handle... well, adversarial situations. Because that is what a CA has to do! They are the holders of our trust.
By behaving the way they do, and more particularly by not providing any of the proof asked for in the process, trust is broken. They do not demonstrate that they have their shit together, do not demonstrate that they understand the process or the problem, and in general show that they are not equipped, in terms of knowledge and skills, to be a CA.
Whether or not they are guilty does not matter anymore at this point. They have failed at a more basic level of being a CA. They cannot do the things we expect a CA to do. Whether they were breached or used their power for bad things doesn't even matter anymore.
In civil courts, the burden of proof usually lies with the accuser, because this is what happens if you let public opinion rule: it's unfair to the accused and rarely ends in their favor, regardless of innocence or guilt, as you so clearly put it.
Being a root CA is a privilege, not a right. A root CA has enormous power over the whole internet, so they must prove that they are absolutely trustworthy beyond any doubt.
In civil courts the burden is the preponderance of evidence. And trust is a higher standard yet where the burden is on those who want the benefits of being trusted.
Part of a CAs job is to manage public opinion because their job is to maintain trust in the CA system. If they cannot instill trust then they should not be a CA.
I read the entire thread too, and "I'm disappointed" that you portrayed (in the root of this comment) an unfair picture of "being harassed by a mob" while then retreating to "I don't know how this works".
It's basically a series of vague appeals to emotion.
> It really feels like a perversely inappropriate forum for discussing a complicated issue like that.
and
> Rachel was a victim of a sloppy but effective smear campaign.
The facts are neatly summarized in the thread as to why the CA was yanked. It's dispassionate (Except the MS flame) and to the point.
Your defense is almost as poor as her veiled accusation that the researcher is misogynistic.
> I am probably in the minority here, but I can’t help but feel like Rachel was a victim of a sloppy but effective smear campaign.
I usually have sympathy for privacy-friendly services being abused by bad actors, and I certainly have sympathy for anyone being impersonated by a state-sponsored APT for nefarious purposes. However, after reading through the thread, this does not seem to match reality.
It took just a few emails back and forth for TrustCor to change their statement from "we know nothing about these people" to "we used to have common investors", while placing the blame on a single recently-deceased individual... which still does not explain how a malicious data-exfiltration (malware) SDK ended up in a beta product of theirs (a question they silently skimmed over), or why they pretend not to know why most of their legal infrastructure is deeply tied to this malicious actor.
Without commenting on the CA operations of TrustCor (and its lack of transparency), or the seemingly-broken security promises of the MsgSafe service, it seems relevant for the CA/B forum that TrustCor is obviously arguing in bad faith and trying to dissimulate ties to a now-well-known APT.
You would certainly expect most CAs to operate more transparently, to be registered where they actually operate, and to disclose where their hardware is located, especially when this location exposes them to NSL-style laws. Operating out of a mailbox in a tax haven, for a company based in Canada, with machines in the USA, is already very sketchy. TrustCor's responses on the mailing list, in my humble opinion, clearly outline that they are acting in bad faith (if not entirely malicious) and should be treated accordingly by browser vendors.
I understand that Rachel is now in a bad position and feels smeared. And maybe she is not the person responsible for the malicious setup/activities of the entire company (maybe she's even unaware), but that's what you get for being the public face of a rather-secretive malicious actor.
I'm not seeing where she's provided answers to the questions that really matter. All she's done is talk in a patronizing manner to the CA members regarding their inability to understand corporate structures, while never answering how or why a MITM company's SDK ended up being embedded in their app.
Further, even in times of stress, lashing out isn't the best decision. If I were interrogated by a cop and called them a bunch of names, I would attract additional charges, on top of being suspected of committing the crime I've been accused of.
To be fair they do say (without proof but that can be hard to provide) that the spyware was put there by a contract developer that was not authorized to add 3rd party tools but did anyway.
That being said, given how extremely evasive they were and the lack of any tangible proof, I don't think it is unreasonable to doubt this explanation. (Come to think of it, wouldn't a contract dev implanting malware be grounds for a lawsuit? Shouldn't that be an open-and-shut case?)
I have to say that even if the “rogue developer” story is accurate, the reaction to it is a little underwhelming. “Sure, our supposed E2EE software did some crazy sketchy shit including proxying trivially-decryptable network packets to god-knows-where through our servers, but, uh, that guy doesn’t work here anymore” is supposed to be satisfying?
I've been advising people for a long time now to take screenshots of emails etc. At the very least, have everything in writing, and don't act on phone calls if you feel things are in a "gray zone" (happens often in startups).
It's pretty clear that a dev with a second degree in law still wouldn't have been able to determine whether companies that shared most of the same infrastructure and listed corporate officers were 3rd parties in the context of the software, without grilling someone who may or may not be a Trustcor executive, may or may not be the past founder, and may or may not be dead, where such a death neither implies nor dismisses the possibility that they are still running the company.
I wondered the same about those "audits". When we had to introduce SOX and had compliance audits, every moving of my small finger needed to be reviewed and documented and have a trail to a senior manager approving the move of my small finger.
She didn't lash out, everyone else did? She made it very clear numerous times that she didn't think the forum appropriate for discussion of speculation.
This response by Rachel McPherson from Trustcor definitely comes across as lashing out:
> Apparently it may also come as a surprise to some readers and the researchers themselves that other root program members are in fact international governments, and some are also defense companies, or companies who are wholly-owned by defense companies and/or state-owned enterprises, meaning "businesses" that are completely owned or controlled by governments. Further, some of those governments are not free/democratic and in fact some have tragic modern histories of basic human rights violations. We are none of those things and our company does not identify with those values. Given this point above, why of all potential targets are these researchers interested in TrustCor? They could go after countries with human rights violations that have placed a CA in the program.
Not only "SDK ended up being embedded in their app" but why they had an unobfuscated version when everyone else has only an obfuscated version of that SDK.
On one hand I agree that being defensive is warranted given the accusation, especially considering most of the claims appear to be related to an entirely different product than their CA business. The bit where someone goes to great lengths to demonstrate how their email isn't E2EE is especially jarring.
That said, it appears Mozilla's decision is founded not on their response in the thread, but on the fact that Trustcor basically is a root CA for the sole reason that they provide a useful service in the exact product being shown as untrustworthy. If the only reason is their email service, and their email service can't hold up to scrutiny (including promising E2EE and not actually providing it, and having poor development security practices), then why do they have root CA power in 99% of client devices? In my opinion, their email service didn't warrant such inclusion to begin with, even if the service was sound, and that's not accounting for their weird corporate ties (which may be legitimate, tbf).
Rachel stated that TrustCor CA has many customers that are not the email service. Because this isn’t a court, discovery won’t tell us if that is, in fact, true or not. But if it is, then it seems completely normal.
Moz's decision isn't based on the customer base; it's based on "what does your being a root CA provide?". Their email service was listed as the primary reason they should be a root CA (regardless of other eventual customers), and if the email service isn't sound, then their primary reason is moot.
That’s fair and I do agree with that line of reasoning. You don’t need a CA to run a mail service anymore. Perhaps we should audit all of the CA value statements and weed out dated entities…
Although m.d.s.policy contributors might represent other uses of the Web PKI, the browser vendors (or at least Microsoft, Mozilla and Google) are primarily interested in web browsers, and it's unlikely that "But look at these SQL Servers" for example is a compelling objection to measures whose primary goal is to secure the web.
And in practice, on the web you're baking SCTs into the certificates (technically sophisticated customers might buy certificates with no SCTs because they know what they're doing, but that's a speciality product; lotta people buy gasoline every day, but not too many need barrels of crude oil; if you claim 2 million distinct customers served daily but then say they're all buying crude oil, I just don't believe you).
To get working SCTs the (pre-)certificate needs to be logged at one of a few dozen trusted Certificate Transparency logs. Which means there's a public record of every such certificate, who issued it and when it was logged.
While this is indeed not a court and doesn't have "Discovery" the CA agreements do require the CA to provide Mozilla and other vendors with complete records of certificates they're interested in, these days that is often provided in the form of crt.sh links because hey, the (pre-)certificates† were in the logs anyway, but it's compliant to provide the data as ZIP files or whatever -- if there is such data and you have it.
So, no, independent researchers can get a pretty good idea by just inspecting a public log view, and the browsers can insist on getting the exact answer if they want it, unless of course TrustCor doesn't care about being distrusted.
† You aren't required to log the certificate as well as a pre-certificate but in many cases CAs do that too. Modern rules are clear that the existence of the (non-working) pre-certificate is assumed to imply the existence of the corresponding certificate even if you claim the certificate was never actually issued.
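For the curious, "inspecting a public log view" can be as simple as querying crt.sh's unofficial JSON endpoint. A rough Python sketch (the endpoint and field names are crt.sh's, but treat this as best-effort; it's rate-limited and not a guaranteed API):

    # List CT-logged certificates matching a domain via crt.sh.
    import requests

    def logged_certs(domain):
        resp = requests.get("https://crt.sh/",
                            params={"q": domain, "output": "json"},
                            timeout=30)
        resp.raise_for_status()
        return resp.json()

    for entry in logged_certs("example.com"):
        print(entry["not_before"], entry["common_name"],
              "issued by", entry["issuer_name"])

Run that against any name a CA claims to serve and you get issuer, names, and dates for every logged (pre-)certificate.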
I agree there was a lot of mud slinging in that thread, but this is the key bit from Mozilla's response, supported by statements which Trustcor haven't disagreed with:
> Certificate Authorities have highly trusted roles in the internet ecosystem and it is unacceptable for a CA to be closely tied, through ownership and operation, to a company engaged in the distribution of malware. Trustcor’s responses via their Vice President of CA operations further substantiates the factual basis for Mozilla’s concerns.
It's not some other company; it's the same owners and operators doing malware under one name and running a CA under another.
The most shocking aspect of this is how it reveals that Mozilla, Microsoft, and Google do zero due diligence before adding a new root CA, relying instead on independent researchers to find problems.
Is that still the case? Or is it just new root CAs get the appropriate amount of scrutiny, but a lot of existing CAs have been effectively grandfathered in because they were added two decades ago when folks weren't as diligent?
EDIT: elsewhere in the thread someone linked the bugzilla request for TrustCor to be added. I had assumed that was a long time ago, but it's "only" 7 years ago.
The problem isn't a lack of "Response", its a lack of "answers".
When someone asks you what the shared ownership of these two entities is, and you tell a story about your summer in Lancaster working in the mail room of one of the entities, it isn't useful or related to the question.
That’s not what happened, is it? She directly answered and said they were funded at one point historically by the same investment group, but that relationship had since dissolved. For all we know, she had been working incredibly hard to keep the business above board despite financial incentives to be corrupt and despite unethical individuals willing to pursue them. If you ask most founders of companies who’s on their cap table, as an outside person you generally won’t get a straight answer, especially not in a public forum. I take issue with the expectation that answers are required in the first place without some substantial reason why they’re relevant.
It's hard to tell from your repeated assertions, which do not match what actually occurred in the thread. It feels like you skimmed, and have some bias toward giving the benefit of the doubt to what you see as the embattled party in some kind of unjust lynching, instead of seeing legitimate questions about entirely shared corporate officer structures and clearly shared dev teams [which the representative blatantly attempted to claim were out of line, while avoiding the damning part of that evidence: that the dev somehow had access to the raw source of the library in question, which very strongly indicates that it was their library and not the rogue dev's] linking a known malware entity and a ROOT level CA.
If you are just trying to play devil's advocate in what you feel is somehow a mob action, instead of what is probably extremely conservative action by some of the biggest and most legally careful entities on the planet, then honestly, I ask why and for what purpose?
"Apart from our CA work, we also bought an aging email service with a few customers, and invested substantially in developing it into a flagship email security product line compatible with global email security standards including both S/MIME and GPG. Then, over the last few years we added unique features our customers demanded. Today it stands alone as a valuable email service enjoyed by millions around the world as an alternative to other popular web based secure email providers."
and unsubstantiated ad hominem attacks
"us recently by a biased group of security researchers "
Pushing the irrelevant BETA narrative
"TrustCor has never released a non-beta, public version of any mobile phone software/version and in fact the only mobile-friendly configuration we support is direct-from-browser mobile access that leverages the popular industry-standard framework for delivering near-app-quality mobile experiences using web browsing on mobile devices. You don’t need any downloaded software to use it whatsoever."
(where the still unanswered question is: Where did TrustCor get the source code of the spyware no-one else has?)
"Perhaps they are working with the US defense community, [...]"
Pushing unsubstantiated conspiracy theories that add nothing to the case
and on and on and on, the same as before; nothing new in that email.
I had only read up to Joel's self-correction (last message of Nov 18, 2022) and my impression was that the TrustCor rep was being evasive and kept changing her story. For example, the CA doesn't operate out of Arizona, that's just where they keep a bit of equipment -- except then it does. And not being able or willing to answer some very simple questions.
That's on top of the technical and legal evidence of the companies basically being the same company.
The wall-of-text could be a result of unfair accusation, sure. No doubt she's under a lot of stress either way! And I did think that some of the accusations in the thread were a bit harsh, sarcastic, or otherwise inappropriate. But the facts stand.
A pinned comment defending the CA with shady government ties. Hmm. Anyway, why don't you do us all a favour and quote some of those answers which you think perfectly clarify things, instead of just telling people they are reading it wrong.
The part that really drives me up the wall is that the same groups of people that drive these sorts of situations will turn right back around and complain that entities knee-jerk react with a salvo of smear campaigns, corporate BS and other dirty tricks in the face of even the most mild and genuine criticism as if they themselves are not the driving factor behind creating a situation in which that is not the "safe" response.
That's a very weird take. The non-answers provided were as slimy as possible, hiding behind word soup and weird legal claims. I don't see anyone innocent arguing that way.
Instead of "tying to set guidelines" you can try to honestly answer the questions.
I think what's missing here is that TrustCor has no "right" to be included as a root CA. The question to ask is: is it in the interest of the browser's users. TrustCor is some (smallish) risk for basically no gain, so the conclusion is is to remove it as a CA.
This sucks for TrustCor. They may have done nothing wrong. That doesn't matter.
"if our concerns have not been resolved by November 22 and further investigation and discussion is still needed, then set “Distrust for TLS After Date” and “Distrust for S/MIME After Date” to November 29, 2022"
"Certificate Authorities have highly trusted roles in the internet ecosystem and it is unacceptable for a CA to be closely tied, through ownership and operation, to a company engaged in the distribution of malware. Trustcor’s responses via their Vice President of CA operations further substantiates the factual basis for Mozilla’s concerns.
[...]
Our assessment is that the concerns about TrustCor have been substantiated and the risks of TrustCor’s continued membership in Mozilla’s Root Program outweighs the benefits to end users.
In line with our earlier communication, we intend to take the following actions:
1. Set “Distrust for TLS After Date” and “Distrust for S/MIME After Date” to November 30, 2022, for the 3 TrustCor root certificates (TrustCor RootCert CA-1, TrustCor ECA-1, TrustCor RootCert CA-2) that are currently included in Mozilla’s root store.
2. Remove those root certificates from Mozilla’s root store after the existing end-entity TLS certificates have expired."
>Trustcor’s responses via their Vice President of CA operations further substantiates the factual basis for Mozilla’s concerns.
So Mozilla is flatly stating that Rachel's Gish galloping bullshit responses in the discussion were a significant part of the reason they don't trust her. She should have just followed a good lawyer's advice and kept her mouth shut, because she made her own problems much worse with her own words.
> The discussion thus far is appreciated and has been both informative and constructive. My post on November 8 indicated that if our concerns have not been resolved by today (November 22) and further investigation and discussion is still needed, that we would set the “Distrust for TLS After Date” and “Distrust for S/MIME After Date” to November 29, 2022, for the 3 TrustCor root certificates. However, we’d like to allow more time for any additional dialogue or external developments to transpire prior to sharing our intended course of action. We will continue our assessment and share out necessary next steps on Wednesday, November 30.
> The most recent attacks against us involved the creation of companies in the United States very similarly named to those of our shareholders (which have since been dissolved). We believe those may have been used in an effort to do something cyber-physical however our only evidence of that has been an attempt to gather more information about our company through insurance inquiries by these companies (which were caught and stopped).
This particular claim is the most interesting. If true, it would be a big story, and if false, it seems like it would open up fraud charges against Rachel.
You are aware that Truth is an absolute defense against libel, right? TrustCor would have to prove that WaPo acted with actual malice to get anywhere with a suit.
I like how periodically we briefly get reminded that PGP's web of trust really is the superior system but we forget about it in a few minutes and move on.
PGP's ceremony is no solution for anything. A Signal-like experience (TOFU, key verification if you wish, alerts when keys change) would be superior, if people were trained more (and recall how hard it was to tell people to look for the green padlock). Autocrypt is an attempt (with little uptake).
Certificates are not only about encryption, but also about authority: am I really connected to my bank? PGP only solves that if I already have verified keys, which is the hardest part, left unsolved.
Unfortunately, proprietary two-factor apps seem to be the way this is increasingly solved. Even further removed from a superior system (but, apparently, practical).
Most of the whole PGP system is inadequate for real use, but it points towards a model and interface which would be adequate: most people, for most encryption, simply want to know the answer to "is this my bank, as my government/insurance company etc. believes it is?", and that sort of key would essentially be distributed by consensus.
But the standard we've got is, once you're "in" to the CA root store, technically you can issue certificates for anything in any context, and the way we interact with them is to simply trust them uniformly.
What we need is a system which lets us easily contextualize the actual trust problem we're solving with a connection: i.e. "I'm contacting a bank in <country>, so I want to know that the government of that country thinks it's a bank" (maybe through the reserve bank of that country, which expresses trust as to that identity). Chains of trust which make sense for the relationship.
As it is, if I visit, say, pm.gov.au and check the certificate, I see that it was issued by GlobalSign. Who are they? Well, they're in the root CA store, which is why they're involved; that was the only requirement. But what I actually want to know is "Am I talking to the Australian government, and at what level?"
The problem isn't made easier by there being two completely distinct questions to be answered:
- connection security: do the crypto credentials used for signing e-mails or encrypting web traffic actually belong to the entity in question? This should have been solved by putting the fingerprints into DNS so that clients can validate them on their own instead of having to trust a CA. DNSSEC and certificate pinning would have been the answer, but both are incredibly complex to set up and littered with failure scenarios that may be very difficult to recover from; in the end it got "solved" by LetsEncrypt/ACME, where the CA acts as a proxy. For e-mail, it got solved better by DKIM, but that's only applicable to the scenario "an email server wishes to check if an email it received from example.com actually originates from example.com", not to "an email client wishes to send an encrypted email to foo@example.com and needs the public key for this mailbox".
- connection authenticity: does the server a client is talking to (e.g. bank.com) actually belong to the legal entity the user expects? That is what CAs were originally designed to verify, and where a much larger amount of trust is placed on the CAs doing their job correctly. One idea to address it was SSL EV certificates, but for whatever reason these fell out of favor, and right now there is no replacement at all for this use case.
Additionally, the situation is made even more complex by legal or compliance requirements, e.g. banks who have to record virtually everything their employees do. For that, they have to break HTTPS by providing their own root certificate.
Except the questions are not distinct as long as you want to protect against MITM attacks. Having a secure connection is meaningless if you have no way of telling that it's secure all the way to whoever you intend to talk to.
And if you do trust the connection to be free from active attackers, then you don't need certificates at all; e.g. a Diffie-Hellman key exchange would be enough to create a shared secret in the presence of a passive adversary that can only observe.
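To make that concrete, here's a minimal unauthenticated key agreement in Python, using the pyca/cryptography library with X25519 standing in for classic Diffie-Hellman. It is safe against an eavesdropper who only observes the two public keys, and useless against an active MITM who swaps in their own, which is exactly the gap certificates exist to fill:

    # Unauthenticated X25519 key agreement: only the public keys cross
    # the wire, so a passive observer learns nothing useful...
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()

    alice_secret = alice.exchange(bob.public_key())
    bob_secret = bob.exchange(alice.public_key())

    # ...but neither side has any idea whose public key it received.
    assert alice_secret == bob_secret  # same 32-byte shared secret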
> PGP only solves that if I already have verified keys, which is the hardest part left unsolved.
But you've opened your bank account in person. When you got the token to log in to the bank website, they might as well have asked you to sign their certificate, since you were there.
Most European countries run proper citizen registries that allow for secure verification of IDs and online opening of bank accounts. Many banks don't require visits anymore.
All the same, two-factor devices (you can mail them too) are being phased out across the board in favor of mobile apps. Which is not great.
Mobile authentication is actually in violation of an EU directive.
My bank linked that very directive when they retired the physical token in favour of using the phone, saying the bad EU was forcing them to take this money-saving step.
I couldn't believe it was true, I read the actual text of the directive and it was in fact not true.
Are you sure mobile app-based authentication is in violation and not just SMS-based authentication? Because my bank is using that as the primary method too with the card-based solution costing extra (fixed cost for the reader/tan generator and not monthly cost for the otherwise useless to me card needed for that).
Replying in case someone still reads this: in the PSD2 text I could not find anything on the prohibition of mobile auth. Note that I do not mean SMS texts, but using an Android/iOS app as two-factor.
Many force you to install their own app instead of using a generic one. Which is a problem.
Also, Google's app doesn't easily let you export and back up the seeds. So you either remember to do that initially, or breaking your phone == losing access to everything.
> Many force you to install their own app instead of using a generic one. Which is a problem.
I agree that having to use a proprietary app is not great, but TOTP is vulnerable to MITM attacks because the tokens are not restricted to a specific action but only to a time slice. For many use cases that is not a big problem, but for a bank account I'd want more security. Of course being able to do your banking in the same app or a slightly different one on the same device kind of defeats the purpose.
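The relay problem is visible in the algorithm itself: a TOTP code is a function of the shared secret and the current 30-second window, and nothing else. A minimal RFC 6238 sketch in Python (stdlib only, demo secret):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, step=30):
        # HOTP (RFC 4226) over the current time slice (RFC 6238).
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time() // step))
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # demo secret, not a real one

Since the code names no action and no recipient, a phishing proxy can forward it to the real site within the same window; an app-based scheme can instead confirm the specific transaction.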
With certificate transparency, there have been almost no invalid certificates found, and anyone who produces one is immediately struck off.
I can't really imagine how a web of trust would let me (for example) securely connect to hellbunny.com, a website I have no direct connection to, and which isn't that famous.
The "trusted" party sometimes fails spectacularly it seems.
> hellbunny.com, a website I have no direct connection to, and which isn't that famous.
And what do you know about that certificate? That some root CA signed it. Do they follow a proper process?
I don't care about random websites. For those letsencrypt is fine. But for banking or paying taxes I want some more checks done. And they aren't provided.
> And what do you know about that certificate? That some root CA signed it. Do they follow a proper process?
The purpose of m.d.s.policy, the discussion group where the decision to distrust Trustcor was made, is to oversee these root CAs. As part of that, we require them to use at least one of the Ten Blessed Methods (there aren't actually currently ten of them) to decide whether a subscriber is entitled to certificates for particular DNS names.
You can read in the CAB BRs https://cabforum.org/baseline-requirements-documents/ what the currently allowed Blessed Methods are in section 3.2.2.4 Validation of Domain Authorization or Control -- each method is numbered e.g. 3.2.2.4.19 is the most often used Let's Encrypt (ACME standard) web site authentication.
You can also read in the CA's own documentation how they claim to implement one or more of the Blessed Methods; for example, Let's Encrypt offer 3.2.2.4.19 and explain that, but they don't offer, say, 3.2.2.4.2, which is sending out emails with a magic random number in them to a domain contact's email address.
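The heart of 3.2.2.4.19 is small enough to sketch. A hypothetical CA-side check in Python (names are mine, not Let's Encrypt's actual code): fetch a well-known URL on the host the DNS name points at, and compare the body to the key authorization the applicant committed to:

    import requests

    def http01_satisfied(domain, token, expected_key_authorization):
        # Does whoever controls this web server know the ACME account's
        # key authorization for this challenge token?
        url = "http://%s/.well-known/acme-challenge/%s" % (domain, token)
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            return False
        return (resp.status_code == 200
                and resp.text.strip() == expected_key_authorization)

(The real thing resolves the name itself, follows limited redirects, validates from multiple network perspectives, and so on, but the proof of control is this comparison.)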
> Why are root CA authorities holding on to private keys and sending them around?
They aren't. In the story you link Trustico are a reseller. Once Trustico revealed that they had these keys which they shouldn't have, all the certificates were invalidated because they're worthless, the issuing CA - DigiCert - did exactly what we want them to do.
If the corner store near me sells cans of Coke which they have poisoned, that's not a problem with Coca-Cola, it's problem with that local store. The local cops should get involved, and yes as happened here, the company whose product reputation they harmed should be angry about that and cut ties to them, no more Coke branded fridge if the store owners somehow stay in business (and indeed in my hypothetical if they avoid jail).
It really isn't, though. It's great in theory but breaks down in practice because the average user has neither the patience nor knowledge to manually deal with that stuff, nor should we expect them to.
It isn't superior because it isn't easy to use. It's only superior on the "secure" side. It would be nice if that concept was re-implemented in an actually superior product, starting with "will the average user be able to use it in their daily communication?"
The OpenPGP standard's trust stuff certainly represents a more flexible system (wildcards match on a regular expression), and at this point it is a simpler and more straightforward one. It ultimately solves more or less the same problem.
TLS links a key fingerprint (ultimately the identity in any cryptographic trust system) to a host name. PGP links the key fingerprint to a name and email address (although the standard does not prevent you from linking, say, your phone number as well). So in either case we link the fingerprint to a more convenient alphanumeric string. That's it, that's all we are doing here. The whole trick is in what you think that alphanumeric string represents.
I sometimes fantasize about a world where the head office of a bank has a giant QR code literally chiselled into the stone of the building representing the key fingerprint of their root identity. That way anyone that walks by can verify all the bank's identities down to the individual employee simply by snapping a pic with their phone. Part of the responsibility of someone working there would be to check the QR code every morning to ensure that no one had come in the night with a jack hammer and concrete...
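What would actually go on the wall is just a hash of key material. One plausible choice, sketched with the pyca/cryptography library ("bank.pem" is a hypothetical file): a SHA-256 over the certificate's SubjectPublicKeyInfo, the same value HPKP pins used, so it survives certificate reissuance as long as the key is kept:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization

    cert = x509.load_pem_x509_certificate(open("bank.pem", "rb").read())
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    digest = hashes.Hash(hashes.SHA256())
    digest.update(spki)
    print(digest.finalize().hex())  # the value you'd chisel into the QR code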
If the certificate says "Company AB", the CA is required to check that's true; there's a set procedure to do so. If you ask Let's Encrypt for a certificate saying "Company AB", they won't give it to you, because they don't follow such procedures and so they know they can't issue such certificates.
But, this is not very useful because:
* Consumers generally have no idea who "Company AB" are. You wanted funny-cat-gifs.example not "VXK Enterprise LLC of New Jersey" who might happen to run funny-cat-gifs.example. Lots of famous brands are actually offered by companies with unrelated names, so then you're asking the CA to vouch for the brand name, which is an extra layer of indirection...
* Jurisdiction is a thing. Maybe I trust Greasy Geoff's Cider And Pork, but alas I was thinking of the Kansas Greasy Geoff's, not the one from Missouri.
* Machines can't do anything with this so it's only useful in the rare case a human spent time thinking about it, whereas the DNS name check is something the machine cares about for every single HTTP transaction, of which often several per second happen.
As others have pointed out, CAs are 100% superfluous given the existing DNS registrar and delegation system. All CAs do is verify that you "own" a DNS domain, which is precisely what DNS registration sets up in the first place.
Registrars should be Root CAs handing out subordinate CA certificates with every domain they issue, scoped to that DNS domain.
This will never happen, because companies like Verisign have billion-dollar vested interests in it not happening.
Technically it makes perfect sense, but the leeches collecting rent on the Internet don't want to let go.
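For concreteness, the "scoped subordinate CA" idea above already has a standard mechanism: X.509 name constraints. A hypothetical sketch with the pyca/cryptography library of what a registrar could hand a domain holder (all names invented):

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    registrar_key = ec.generate_private_key(ec.SECP256R1())  # registrar's root key
    holder_key = ec.generate_private_key(ec.SECP256R1())     # domain holder's key

    now = datetime.datetime.now(datetime.timezone.utc)
    sub_ca = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com CA")]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Registrar Root")]))
        .public_key(holder_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=398))
        # A CA certificate that may only sign end-entity certs...
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        # ...and only for example.com and its subdomains.
        .add_extension(
            x509.NameConstraints(permitted_subtrees=[x509.DNSName("example.com")],
                                 excluded_subtrees=None),
            critical=True,
        )
        .sign(registrar_key, hashes.SHA256())
    )

Clients already enforce critical name constraints, so each delegated CA would stay harmless outside its own domain.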
DNS registries already hand out signing certificates, just not for TLS certificates but for DNSSEC. DANE bridges the gap. It works today (*).
(*) In supporting clients, conditions may apply.
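If you haven't seen DANE in the wild, a TLSA lookup is just a DNS query; a quick sketch with dnspython (most hosts publish no record, and a real client must also require DNSSEC validation of the answer):

    import dns.resolver  # pip install dnspython

    # TLSA records for an HTTPS endpoint live at _443._tcp.<hostname>.
    answers = dns.resolver.resolve("_443._tcp.example.com", "TLSA")
    for rr in answers:
        # e.g. usage=3 selector=1 mtype=1 means: match the SHA-256 of
        # the end-entity certificate's SubjectPublicKeyInfo, no CA involved.
        print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())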
I'm not sure what kind of pull big cert has that could allow them to stall DANE adoption. Sure, VeriSign acts as both a CA and a registry for the big domains - but they don't own those domains.
A TRUSTCOR employee NOT having authorization to sign a root certificate did it anyway and got the auditor in trouble. I am trying to get the CFO to explain to him that YOU cannot let his secretary issue certificates with no EXPERIENCE.
Or anyone with a couple brain cells to rub together. I mean, sure, they might not be related and this is all a huge misunderstanding, but CAs should be trusted beyond any doubt and Trustcor isn't even close.
My best attempt at a TL;DR:
TrustCor had some connections to an app whose Android version contained spyware. They claim it was placed by a malicious previous employee, no name given. But a recently deceased key founder's son was heavily involved with the company that made the spyware. I think TrustCor's claim is that the son was only an investor in both firms and therefore had no direct involvement. This seems not to be the case: he is listed as "Key Principal" in a directory entry about the spyware company, and he is also mentioned as an employee of TrustCor in the emails.
Meanwhile the EU is considering legislation that would require all browsers to recognize and accept any company that pays a fee, irrespective of how clearly untrustworthy that company is.
So in the future it could be illegal in the EU for a browser to drop a CA, even when they’re caught doing objectively fraudulent acts.
All because a bunch of MEPs are presumably being paid off by CAs who want to get legislatively mandated rent payments.
> irrespective of how clearly untrustworthy that company is.
Isn't that a bit of an exaggeration? Surely what the EU is proposing is that browsers have to accept just those companies which pay the necessary fee and which some EU body declares to be trustworthy.
You're right, though, that this still adds to the attack surface, because now you have to trust not just your browser vendor, and all the CAs that they trust, but also this EU body and all the CAs that they trust.
That EFF page is really weird - English text, but right-aligned, with question marks to the left of the text, as if the text had been translated from a RTL language (Farsi perhaps given the /fa/ in the link?)
From the looks of it the EU wants to push out a wide spread identity management system, which is fine.
It's unclear to me after a brief perusal why they can't use a normal root certificate, or create their own along Let's Encrypt lines (indeed, it would be a great benefit to have another widespread, free, ACME-powered certificate authority, one funded by the taxpayer), with the same protections and transparency as LE.
>That EFF page is really weird - English text, but right-aligned, with question marks to the left of the text, as if the text had been translated from a RTL language (Farsi perhaps given the /fa/ in the link?)
It's just a Farsi link to an English article that's automatically applying RTL styling; the original version of the page in English looks fine.
> or any say over the member states' criminal legislation? That Commission?
One of the jobs of the commission is deciding whether member states are in line with their treaty obligations, that includes any laws passed by said member states.
> At least that is the only thing that keeps the EU Commission from going full police state.
Such an extraordinary claim.
Are you able to provide any evidence that supports your claim that at any time the EU Commission tried to go "full police state", and that courts kept that from happening?
> illegal in the EU for a browser to drop a CA, even when they’re caught doing objectively fraudulent acts.
Right now Mozilla, Microsoft, and many other privately owned root certificate maintainers control whether or not CAs are trustworthy. I actually see your statement the other way around: it would be illegal for a browser to support a CA when the EU deems it untrustworthy.
Browser vendors could solve the issue by distributing two editions of their browsers, one only trusting certificates accepted by the EU and one rejecting all the certificates in the list by the EU.
You don’t pay a browser to get your roots included.
You file a bug report, and provide the supporting documentation and audits. If the relevant root program considers your supporting documentation sufficient, they include you.
You cannot pay a root program to include your root certs, and any CA that was found to have done so would likely find other programs considering that to be a red flag.
Three of the major trust stores are also OS vendors. Microsoft, Apple and Google.
Each of those three actually just has a rather opaque "send us an email and we'll talk about it" type process. I don't think there's any implication that you can or should pay them money (indeed Microsoft is clear that there is "No Fee" for this), clearly none of these organisations is desperate for cash. But it's actually unclear what you can do besides maybe send email and hope.
Mozilla's process is as you state to file a Bugzilla bug. You go in a big queue and there's eventually a public discussion on m.d.s.policy once you get to the front of the queue.
Now, in theory there is no relationship between Mozilla's transparent process and say, Apple or Microsoft. Indeed even for Google there's only a rather thin relationship, the final decision is theirs but they do say you need to talk to Mozilla.
And when they find evidence of a CA misbehaving they revoke it, as here; they don't go to the EU and say "hey, we think you need to re-audit this company" and then wait a few months/years while the business continues to pay the EU to remain in the root stores.
According to the article, TrustCor doesn’t even own a physical mailbox. I don’t want to imagine how and where they might be keeping their root CA private key material.
> I don’t want to imagine how and where they might be keeping their root CA private key material.
The handling of key material is supposed to be checked as a part of the (required) yearly audits, which they have passed[1,2] (though the single auditor they’ve always used “does not audit any other publicly-trusted CAs”[3]). The links are in the Common CA Database (CCADB) [4], but it seems really hard to find a good publicly-accessible report page (I still haven’t found the older audits, for example).
ETA: For TrustCor specifically, Kathleen Wilson (responsible for the Mozilla root store) has collected the audit reports on Bugzilla[5].
On HSMs: purpose-built hardware that tries to make it physically and programmatically impossible to extract the private key material. The keys are generated there during a key ceremony and never leave an HSM. They also generally require two or three officers of the company, each with a smart card and personal PIN, to actually do anything using the root CA (which only signs an intermediate cert once in a blue moon or so).
I'm pretty sure the CA/B Forum mandates all CA private keys to remain on HSMs (checked through audits).
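To make "keys never leave the HSM" concrete, here's a minimal sketch of what that looks like at the PKCS#11 level, using the python-pkcs11 bindings. The module path, token label, and PIN are placeholder values (SoftHSM shown as a stand-in for real hardware), and actual root key ceremonies layer the multi-officer smart card controls on top of this in the device itself:

    import pkcs11
    from pkcs11 import Attribute, KeyType

    # Placeholder module path and token label for illustration.
    lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")
    token = lib.get_token(token_label="root-ca")

    with token.open(user_pin="1234", rw=True) as session:
        # The key pair is generated inside the token; with these
        # attributes the device will never reveal the private key bits,
        # and all signing happens on-device.
        pub, priv = session.generate_keypair(
            KeyType.RSA, 4096, store=True, label="root-2024",
            private_template={
                Attribute.SENSITIVE: True,     # no cleartext export
                Attribute.EXTRACTABLE: False,  # no export at all
            },
        )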
As far as I understand it, there are basically five organizations involved:
1. TrustCor operates "TrustCor CA" and "MsgSafe". TrustCor claims these are completely separate lines of business, but does not dispute that it owns and controls both of them.
2. MsgSafe incorporated a spyware SDK from Measurement Systems into their Android app. TrustCor claims this was done by a rogue contractor (and thus TrustCor had no control over it), and they also claim it wasn't a security breach (implying TrustCor allowed the contractor to do it). Also, MsgSafe claims to offer end-to-end encryption while demonstrably not offering end-to-end encryption.
3. Measurement Systems produced a tracking SDK that is now considered spyware. It has historically shared many high-level employees with TrustCor, including (allegedly) having the same CFO and CEO.
4. Vostrom aka Packet Forensics sells spy equipment that claims to be able to break HTTPS, like you'd be able to do if you had access to a root certificate. They have nebulous ties to both TrustCor and Measurement Systems.
5. TrustCor CA is no longer trusted because of the above, and because its VP answered questions about it by saying her lawyer advised her not to comment.
Another small correction: In order to MITM https, you need access to either a trusted root certificate and key, or a key & certificate with the CA bit set, signed by a valid root CA.
Assuming you haven't somehow stolen the real site owner's private key, you would need to produce a certificate for that DNS name, signed with a key you do have.
Which is something you could in principle do if you are a trusted root CA. But this creates a smoking gun. The bogus certificate is a public document, you're always giving it to the client, and for Chrome, Safari, and Chromium Edge you are also obliged to publicly log the certificate, where everybody can see it forever, in order to have an SCT (proof of logging) which those browsers insist on seeing.
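You can inspect those embedded SCTs yourself. A minimal sketch using the pyca/cryptography library, with leaf.pem standing in for any certificate a public site serves you:

    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    # Load a certificate and print the SCTs (proofs of CT logging)
    # that were embedded in it at issuance time.
    with open("leaf.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    scts = cert.extensions.get_extension_for_oid(
        ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS
    ).value
    for sct in scts:
        print(sct.log_id.hex(), sct.timestamp)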
Modern rules require a root CA to disclose any intermediate CAs that are created, even those not currently in use (e.g. because they're still being tested), if they could issue trusted certificates, unless the intermediate's certificate is technically constrained (which is complicated, but a general-purpose CA is not technically constrained for the purposes of this definition).
In practice, most outfits offering "MITM" type capabilities are for corporate environments, education, that sort of thing, where you can say "all employees/students/whatever shall trust our private CA FOO" and then you can MITM using the trusted FOO CA. So this doesn't interact with the Web PKI overseen by m.d.s.policy at all. If you don't want to get MITM'd, don't trust some sketchy private CA.
Maybe a sufficiently crafty vendor of MITM equipment could prevent MITM site certificates that are signed by evil intermediate CAs from appearing in CT logs by filtering access to those CT logs. But it is a risky proposition for the vendor, as you've said.
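The monitoring side, though, needs nothing from the vendor at all, because the logs are public. A small sketch using crt.sh (one public search frontend over the CT logs) and its JSON output:

    import requests

    # Search the CT logs for every certificate covering *.example.com.
    resp = requests.get(
        "https://crt.sh/",
        params={"q": "%.example.com", "output": "json"},
        timeout=30,
    )
    for entry in resp.json():
        # An issuer you don't recognise here is the smoking gun.
        print(entry["issuer_name"], entry["name_value"], entry["not_before"])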
A company with such an important role on the internet should be extremely transparent, there’s no benefit to the end-users of giving such a company such amount of privacy.
There was substantial evidence that these were one and the same company. The CA had no substantial public certificate-issuing program anyway, so kicking it out was an easy decision either way (and it raises the question of what exactly they were doing to justify the multi-million-dollar operating costs of a root CA).
It really doesn’t matter if it was circumstantial or not. When presented with such evidence, the TrustCor rep was extremely evasive. This isn’t a court of law; if a Root CA can’t be up-front about explaining evidence that calls their trust into question, then they cannot be trusted and deserve to have their certificates yanked.
Yeah, I personally feel for the rep; they were indeed under attack. But there's no getting around it: a company that advertised a product as E2E-encrypted when it really wasn't, and that has multiple levels of ties to the CA, was enough to deem the CA unfit.
I agree with this conclusion because the CA ecosystem is a fragile one if not governed strictly, since the risks are of the highest concern to the general use of the Internet.
> circumstantial evidence, but nothing that was definitive
That's not how things work. The role of a Certificate Authority is to act as a trusted third party. If that third party is unable or unwilling to demonstrate that they are trustworthy then naturally they can't be expected to assume that role.
Looks to me like TrustCor's subsidiary company MsgSafe made an app containing spyware. In addition, that spyware funneled data to a hardcoded URL on a MsgSafe server which the TrustCor rep openly admits was only protected by a self-signed cert of unknown origin, and was forwarded on to unknown destinations as raw TCP packets.
There is a lot of doubletalk in the thread that is supposed to somehow lead us to believe that TrustCor CA and MsgSafe are totally separate companies, despite lots of circumstantial evidence that they aren’t.
It also happens to appear that MsgSafe and the company that actually created the malware (Measurement Systems) might be closely related and/or the same, owing to a lot of the same names on corporate documents (many of which names are shared with Trustcor and/or Trustcor CA), plus the extremely suspicious fact that the malware in question was only ever distributed elsewhere in obfuscated form, yet somehow MsgSafe seems to have an unobfuscated copy built into their app.
It’s also extremely odd that, despite all the protestations about TrustCor CA and MsgSafe being completely unrelated, the TrustCor CA director of business operations has intimate knowledge of the source control, server configurations, and VM snapshots of the server that the traffic was being proxied through at MsgSafe.
> There is a lot of doubletalk in the thread that is supposed to somehow lead us to believe that TrustCor CA and MsgSafe are totally separate companies, despite lots of circumstantial evidence that they aren’t.
Instead, what they claim is that these two parts of the business are operated separately. As in, MsgSafe doesn't run on the same servers as the CA. So if a "rogue contractor" adds malware to MsgSafe and it goes undetected for several years, that shouldn't reflect badly on the CA side of the business at all.
(TrustCor was so evasive about this that they seem to have misled most of the people in this thread, though.)
> what they claim is that these two business are operated separately
Except that the person doing all this claiming happens to be the director of operations for both.
I concede that it isn’t impossible that they have a strong firewall between these two companies. But all the obfuscation, defensiveness, and easily refuted claims don’t really make anyone willing to swallow that story.
Given that, according to TrustCor's own statements, MsgSafe relies heavily on certificates issued by TrustCor, I don't think there's all that much separation. (There might be, like, a network firewall in between the two server fleets, but even that is doubtful.)
Yeah, reading through this whole saga, one of the things I wondered was whether MsgSafe might actually have the ability to get any certificate it wants from Trustcor CA through this special relationship. If it’s got that kind of permission, all the corporate governance separation in the world isn’t going to matter.
I think the concern is less about MsgSafe getting any certs it wants and more about how shoddy development practices (letting the third-party malware SDK be included) and business ethics (false E2EE advertising and typosquatting) at MsgSafe reflect on TrustCor's ability to operate a CA, given the shared management.
The other large concern is the large number of links between TrustCor and Packet Forensics, Measurement Systems, and Vostrom Holdings: specifically the history of shared ownership and corporate officers, and the inclusion of the only known non-obfuscated copy of a Measurement Systems malware SDK by a "rogue contractor" for TrustCor.
As an aside, I thought it was interesting that Rachel made a point of placing the blame on a contractor and then later admitted that they pay all their "employees" via 1099s.
Is this the first time a CA has been distrusted for reasons completely unrelated to the security or operations of the CA itself?
Reading through Kathleen's summary of the issue[1] that others linked elsewhere in this thread, the only statement relevant to the actual CA portion of TrustCor's business seems to be:
> There is no evidence of TrustCor mis-issuing TLS or SMIME certificates.
Even the original post says:
> Just to restate: I have no evidence that Trustcor has done anything wrong, and I have no evidence that Trustcor has been anything other than a diligent competent certificate authority.
So it seems like the sole basis for this action is TrustCor's mere affiliation with another company that does TLS interception? If that's true, it seems like a pretty significant departure from previous removals, which all (to my knowledge) involved some sort of action or inaction affecting the security of the CA certificate itself.
Is there anything in either Mozilla or Microsoft's root store policies which prohibit CAs from being affiliated with shady companies? Or does this just fall under the "at our sole discretion" clause?
Closest I've seen is when UAE's Darkmatter was revoked as a sub-CA. Darkmatter claimed their "Trust Services" business unit was separate from the business units accused of spying for the UAE government.
Yeah, now that you mention it I do remember that one. Though I think in that case there were some additional factors at play, like the fact Darkmatter wasn't actually a trusted root CA in the first place (only an intermediate CA, like you said), and had just got caught using insufficient entropy in their certificate serial numbers (which wasn't a huge deal in terms of impact, but was technically still against the baseline requirements).
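For context: the Baseline Requirements demand at least 64 bits of CSPRNG output in every certificate serial number, and satisfying that is trivial. A minimal sketch using the pyca/cryptography helper:

    from cryptography import x509

    # random_serial_number() draws roughly 159 bits from os.urandom,
    # comfortably above the 64-bit CSPRNG minimum in the BRs.
    serial = x509.random_serial_number()
    print(hex(serial))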
> So it seems like the sole basis for this action is TrustCor's mere affiliation with another company that does TLS interception?
The CA representative was presented with the challenge of proving their CA should be trusted, and failed it. This isn't a case of "presumed innocent until proven guilty" as in a criminal trial, so looking at it through that lens isn't very helpful.
I think it is reasonable to conclude from Rachel's communications that TrustCor cannot be clearly identified as a trustworthy root CA, and thus they have been removed.
Maybe that's a reasonable standard, but if that's what they're using now it's still pretty noteworthy, since like I said I can't recall any instance of a CA being removed for that reason before.
In the past there's always been some sort of egregious security issue that calls into question the security of the CA certificate itself.
> So it seems like the sole basis for this action is TrustCor's mere affiliation with another company that does TLS interception? If that's true, it seems like a pretty significant departure from previous removals, which all (to my knowledge) involved some sort of action or inaction affecting the security of the CA certificate itself.
A single entity issuing TLS certs and also selling TLS MITM services is pretty obviously an enormous conflict of interest. If this isn’t spelled out explicitly anywhere, it should be.
https://groups.google.com/a/mozilla.org/g/dev-security-polic...?