I suspect that the definition of "Safe" in this context is that it has limited ability to mess with your computer. From what I have read, the application didn't violate the security of anyone's computer; it didn't need to!
So we need to be careful with how we interpret "Safe!"
In the 1990s we had Netscape with SSL and Microsoft with PCT. We (the world) really didn't need two protocols to do the same thing. So we (the IETF) got the Microsoft folks and the Netscape folks to work together to come up with a merged protocol. This resulted in the birth of Transport Layer Security, aka TLS, which is what we use in browsers today...
Another tool to look at is vpncloud (https://github.com/dswd/vpncloud). It also builds a mesh network over UDP. Key setup is a bit easier: static keys are only used for authentication, while encryption keys are dynamically generated and replaced on a schedule.
I combine it with an ansible script to push out the (minimal) configuration to end nodes.
P.S. It is a Rust program. I compile it as a static binary, so my ansible script can push the binary out to any Linux distribution (that is x86_64) and it will run.
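For reference, building a Rust program such as vpncloud as a fully static binary is typically done with the musl target; the exact paths below assume you're inside a checkout of the repo and have rustup installed:

```shell
# One-time: add the musl target, which links against a static libc
rustup target add x86_64-unknown-linux-musl

# Build as a static binary instead of linking against the host's glibc
cargo build --release --target x86_64-unknown-linux-musl

# The result has no dynamic library dependencies, so it runs on any
# x86_64 Linux distribution regardless of its libc version
file target/x86_64-unknown-linux-musl/release/vpncloud
```

This is what makes the "push one binary everywhere" ansible approach work: there is nothing to install on the target beyond the binary and its config.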
Another tool worth looking at is vpncloud (https://github.com/dswd/vpncloud). I used to use tinc, but switched to vpncloud 2 years ago.
In my use case, I have a modest number of nodes. Although nodes learn of other nodes from each other, I use ansible to keep each node's config updated.
I use vpncloud (and previously, tinc) between docker hosts. So you have to be careful about interface MTUs inside of docker, particularly if you use containers based on Alpine.
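As a rough sketch of the MTU issue: the tunnel's UDP encapsulation eats into the physical MTU, so interfaces inside docker need a correspondingly smaller one. The overhead figure, interface name, and network name below are assumptions for illustration, not vpncloud's exact numbers:

```shell
# Physical MTU on the underlay network
OUTER_MTU=1500
# Generous allowance for IP + UDP + tunnel framing (assumption)
TUNNEL_OVERHEAD=100
INNER_MTU=$((OUTER_MTU - TUNNEL_OVERHEAD))
echo "$INNER_MTU"

# Apply the reduced MTU to the tunnel interface and to docker networks
# that ride over it (names here are hypothetical):
#   ip link set dev vpncloud0 mtu "$INNER_MTU"
#   docker network create -o com.docker.network.driver.mtu="$INNER_MTU" mesh-net
```

If the container-side MTU is left at 1500, large packets get fragmented or silently dropped inside the tunnel, which shows up as mysteriously hanging connections.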
A less permanent solution is to use versioned buckets with MFA Delete turned on. You can then clean up versions if you need to by disabling MFA Delete, which itself requires the MFA device. So as long as your MFA device is not online, then even if someone compromises your servers, they cannot disable MFA Delete and cannot remove versioned objects.
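With the AWS CLI, enabling this looks roughly like the following; the bucket name, account ID, and MFA serial/token are placeholders, and note that MFA Delete can only be changed by the bucket owner's root credentials with the MFA device in hand:

```shell
# Turn on versioning and MFA Delete in one call (placeholders throughout)
aws s3api put-bucket-versioning \
    --bucket my-backup-bucket \
    --versioning-configuration Status=Enabled,MFADelete=Enabled \
    --mfa "arn:aws:iam::123456789012:mfa/root-mfa-device 123456"

# Confirm both flags are reported as Enabled
aws s3api get-bucket-versioning --bucket my-backup-bucket
```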
It's actually worse. The new root (good, I believe, until 2038) uses the same key as the now-expired certificate. It has to, or it would not be possible to validate the certificates that were issued. And this new one is a root certificate installed in browsers!
What "should" happen is that no certificate should be issued with an expiration date later than the issuing certificate. Then as the issuing certificate gets closer to expiration, a new one, with a new key pair, should be created and this new certificate should sign subordinate certificates.
Sorry to reply to my own comment. But I want to clarify. Two certificates (at least) expired. The root named "AddTrust External CA Root" and a subordinate certificate with a subject of "USERTrust RSA Certification Authority." Both expired around the same time.
The "USERTrust RSA Certification Authority" certificate signed yet another layer of intermediate certificates.
The "USERTrust RSA Certification Authority" certificate was promoted to a self-signed certificate, now in the browser trust stores, using the same key pair as the original certificate that was signed by "AddTrust External CA Root." It has an expiration of 2038 (although that concept is a bit vague in a root certificate).
There's actually a third certificate for "USERTrust RSA Certification Authority", also using the same key pair, signed by a different root called "AAA Certificate Services". It looks like the intended replacement for the expiring one is this cross-signed certificate, rather than the version where it is itself the root.
It is explicitly not a replacement, but some kind of legacy fallback that they don't want you to use; it exists for enterprise customers that absolutely can't get trust any other way.
That's what my browser shows me too, but it's just because it's ignoring the cross-signed one that chains to AAA. The server is sending it, per InCommon's setup instructions.
The old TLS (versions 1.0, 1.1, 1.2) specifications said that the certificates supplied are to form a chain, starting from a leaf and leading back towards a root.
Pretty much all clients will, once they can see a path to a root they trust, give up following the provided chain and trust that path - but sadly not all of them, so "over-specifying" the chain can cause problems.
Modern clients tend to go further: they still assume the first certificate is the leaf, but treat all other certificates as just potential hints that might be helpful in working out an acceptable trust path. TLS 1.3 actually specifies that clients must tolerate certificates supplied on this basis rather than as a strict "chain".
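A small openssl sketch of "extra certificates are hints, not a strict chain": build a three-level test PKI, then verify the leaf while supplying the intermediate separately via -untrusted rather than as an ordered chain. All names are made up:

```shell
# Throwaway root CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.pem \
    -days 90 -subj "/CN=Demo Root"

# Intermediate: a CSR signed by the root, marked as a CA
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
    -subj "/CN=Demo Intermediate"
printf "basicConstraints=critical,CA:TRUE\n" > int.ext
openssl x509 -req -in int.csr -CA root.pem -CAkey root.key -CAcreateserial \
    -out int.pem -days 60 -extfile int.ext

# Leaf signed by the intermediate
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
    -subj "/CN=leaf.test"
openssl x509 -req -in leaf.csr -CA int.pem -CAkey int.key -CAcreateserial \
    -out leaf.pem -days 30

# Path building: the intermediate is offered only as an unordered pool
# of candidates, yet verification still finds a path to the trusted root
openssl verify -CAfile root.pem -untrusted int.pem leaf.pem
```

The -untrusted pool is exactly the "hints" role the extra certificates play in a modern handshake: the client assembles its own path from them instead of taking their order on faith.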
I'm actually surprised at the number of claimed clients which don't have vaguely modern trust stores but do understand SHA256.
> I'm actually surprised at the number of claimed clients which don't have vaguely modern trust stores but do understand SHA256.
All the clients that were limited to SHA-1 have already been forced off https; CAs in the CA/Browser Forum weren't permitted to issue SHA-1 certs valid past Jan 1, 2017, and you had to have gotten those issued before Jan 1, 2016. Browsers were showing warnings on SHA-1 certs (depending on expiration) throughout 2015, so you had to either put up with a warning (and the customer-service burden thereof), ditch your old clients and go SHA-2 only, segregate traffic, or build custom software to send SHA-1 certs to some clients and SHA-2 certs to others.
Microsoft added support for SHA-2 certs in the OS system stack with XP Service Pack 3, released in 2008, and Microsoft was always pretty slow with support on things; other platforms may have supported this earlier. A CA bundle from around 2005-2008 is going to be fairly limiting today. A lot of CAs back then had a 20-year validity period, which may have started 5-10 years before the bundle date. Of course, a lot of bundles today end in 2038, so we'll be screwed then.
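If you ever need to check which hash a given certificate's signature actually uses, openssl will tell you; the throwaway self-signed cert below exists only so the example is self-contained:

```shell
# Generate a throwaway self-signed cert, explicitly requesting SHA-256
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.pem \
    -days 30 -sha256 -subj "/CN=sig-demo"

# Inspect the signature algorithm; for an RSA key signed with SHA-256
# this line reads "sha256WithRSAEncryption"
openssl x509 -in demo.pem -noout -text | grep "Signature Algorithm" | head -n 1
```

Running the same grep against a cert from an old bundle is a quick way to spot lingering sha1WithRSAEncryption signatures.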
Nope. It ran an in-house OS named "Delphi". There was a PDP 11/45 at LCS and a PDP 11/40 in building 38 used to teach 6.031 (the predecessor of 6.001).