
Open access rail operators are not permitted to compete with franchisees (on the same routes/proposition) - there's a test that is applied before granting a license to ensure that the services will be "not primarily abstractive" - that is, that the operator will generate new revenue rather than simply taking away from the franchisee.


MLS is currently being adopted by Google Messages for RCS (https://security.googleblog.com/2023/07/an-important-step-to...)

(I'm a Google employee, but don't work on this, not speaking for the company, etc etc etc)


It's still there, although it too got rebranded: https://sitereport.netcraft.com


It does if it has to go through the Windows Defender check. Enough that sometimes I end up launching the same thing multiple times because I get gaslit into assuming I haven't launched it.


According to the first post in the linked thread, the switch to clang is transparent enough that the VS IDE-based debugging experience still works.

My guess (not being a Chromium developer myself) is that this is an exercise in consistency across platforms - I can imagine that using the same compiler on all supported platforms makes the language features supported consistent and removes the risk of one compiler generating much worse output for the same code. But I'm just postulating, I don't actually know for sure.


As long as the other application is Firefox, Safari, IE11/Edge, or Opera, then it probably has a HSTS preload list that is at least in part generated from the Chrome one.

Firefox has some scripts which go through and check that everything still on the Chrome list is still announcing the preload header, and will auto-remove entries if that isn't the case, IIRC. I wouldn't be too shocked if Apple/Microsoft were doing something similar.
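For a sense of what such a re-check involves, here's a rough sketch (this is not Mozilla's actual script, and the one-year max-age threshold is Chrome's submission requirement as I understand it, not something from a spec):

```python
# Sketch: would a domain's Strict-Transport-Security header still qualify it
# for the preload list? Real scripts would fetch the header over HTTPS first.

def parse_hsts(header: str) -> dict:
    """Parse an HSTS header like 'max-age=63072000; includeSubDomains; preload'
    into a dict of lower-cased directives."""
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        directives[name.lower()] = value or True
    return directives

def still_qualifies(header: str, min_max_age: int = 31536000) -> bool:
    """Assumed preload criteria: max-age of at least one year, plus the
    includeSubDomains and preload directives."""
    d = parse_hsts(header)
    try:
        max_age = int(d.get("max-age", 0))
    except ValueError:
        return False
    return (max_age >= min_max_age
            and "includesubdomains" in d
            and "preload" in d)
```

A domain serving `max-age=63072000; includeSubDomains; preload` would stay on the list; drop the `preload` token or shrink `max-age` and it becomes a removal candidate.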


"As long as the other application is Firefox, Safari, IE11/Edge, or Opera, then it probably has a HSTS preload list that is at least in part generated from the Chrome one."

Is there any documentation for these browsers that officially says exactly what they're doing and how their preload lists are generated?



As an interesting, although unrelated, aside: platform data is available in the UK National Rail feeds, but the terms and conditions[1] explicitly prohibit displaying platform numbers early, as mentioned in their developer guidelines[2].

I guess the wording doesn't technically ban you from displaying historical platform information, but that would likely be a bad-faith use of the data anyway...

[1] http://www.nationalrail.co.uk/static/documents/Terms_and_Con...

[2] http://www.nationalrail.co.uk/static/documents/Developer_Gui... "Occasionally Time-Bound Data will become available through NRE feeds before it is ready to be published to the public. [...] One such example of Time-Bound Data is platform numbers. Early display of platform numbers, particularly at origin and destination stations, can lead to platform overcrowding and/or staff not having sufficient time to prepare the train for oncoming passengers. In some instances, platform numbers will be available in Darwin before being displayed on screens in stations."
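A feed consumer trying to stay on the right side of that guideline would need some kind of suppression rule. A minimal sketch, where the field names (departure_time, platform, platform_confirmed) are illustrative rather than the actual Darwin schema, and the 20-minute window is an arbitrary choice of mine, not a figure from the terms and conditions:

```python
from datetime import datetime, timedelta
from typing import Optional

def displayable_platform(service: dict, now: datetime,
                         window: timedelta = timedelta(minutes=20)) -> Optional[str]:
    """Return the platform number to show, or None if it should still be
    withheld (not yet confirmed, or too far ahead of departure)."""
    if not service.get("platform_confirmed"):
        return None
    if service["departure_time"] - now > window:
        return None
    return service["platform"]
```

So a train leaving in ten minutes gets its platform shown, while one ninety minutes out doesn't, even if the feed already carries the number.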


UK rail data is quite open compared to the situation in this story. For instance, check out these live track diagrams (of London Waterloo and many other areas), which show real-time train movements. This is how I often find out platform numbers before travelling, and during disruption I sometimes get information before the staff do.

http://www.opentraintimes.com/maps/signalling/wat#T_WAT


There are maps of many more routes at Traksy: https://traksy.uk/live/M+2+CARLILE


Great to see another good implementation of rail maps. I do, however, prefer the opentraintimes layout, as it's easier to scroll through while reflecting the look of the operational systems.


Once upon a time, the platform used to be displayed in the National Rail app even when it wasn't yet shown on the departure boards. For a long time I used that to good effect to get a seat on rush-hour trains at Paddington.

That changed a few years ago; interesting to see the formal policy behind it, and an idea for a side-project :-)


That might be because they're the CA underlying Cloudflare's automated SSL issuance - at least for free tier customers.


Google's also been looking to limit the maximum validity lifetimes in general through the CA/B Forum[1] in a ballot that ended up not passing (with hints[2] that Chrome would end up enforcing something similar itself even if it wasn't part of the Baseline Requirements).

This seems to be indicative of the general direction that Chrome wants to head in anyway[3].

[1] https://cabforum.org/pipermail/public/2017-January/009373.ht...

[2] https://cabforum.org/pipermail/public/2017-February/009746.h... - there was a more explicit post elsewhere but I can't find it in the archives right now

[3] https://twitter.com/sleevi_/status/829804370900426752


> with hints[2] that Chrome would end up enforcing something similar itself even if it wasn't part of the Baseline Requirements

Kinda undermines the idea of having a standards group if Google is going to strongarm the industry by doing their own thing anyways


"Baseline Requirements" sort of implies that what is specified is minimum, rather than exhaustive, rules, and that specific applications will have additional rules.


And they all do.

Mozilla's rules are public so you can go read them, and indeed you can help write them. But most famously they required all CAs to disclose loads of stuff, and they require CAs to do lots of stuff in public where everybody can see it, not behind closed doors where we don't know what they're up to.

Google's rules include lots of stuff about their Certificate Transparency idea, which has helped no end.

Microsoft's rules famously include them getting a veto where they can order any CA to revoke a certificate or else leave their trust programme. They mostly use this to zap malware / phishing sites.

Apple's rules forbid having lots of roots at once. Although apparently this didn't apply to Symantec, or various other people. Huh.


The standards group in question is unfortunately impotent. There are two totally reliable voting blocs: the browsers and the CAs. There are more CAs than browsers, so the result of every vote is in favour of the CAs.


As was noted above, the Forum has a constituency representation system: ballots have to pass with support from both constituencies (two-thirds of the voting CAs and a majority of the voting browsers), rather than by an absolute majority of members.
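A toy model of that voting rule, as I understand the bylaws (two-thirds of the CAs that vote *and* a simple majority of the browsers that vote; abstentions don't count):

```python
def ballot_passes(ca_yes: int, ca_no: int,
                  browser_yes: int, browser_no: int) -> bool:
    """CA/B-Forum-style constituency vote: a ballot needs a two-thirds
    supermajority among voting CAs AND a simple majority among voting
    browsers (certificate consumers)."""
    ca_votes = ca_yes + ca_no
    browser_votes = browser_yes + browser_no
    if ca_votes == 0 or browser_votes == 0:
        return False  # at least one vote from each constituency is required
    return ca_yes * 3 >= ca_votes * 2 and browser_yes * 2 > browser_votes
```

So even though CAs outnumber browsers, the browsers can't be outvoted as a bloc: a ballot that every CA supports still fails if most browsers vote against it.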


You wouldn't want a standards group that somehow mandates all clients must accept all valid certs, right?


> Kinda undermines the idea of having a standards group if Google is going to strongarm the industry by doing their own thing anyways

In any industry where a single actor has a clear majority of the market share, you either vote with your feet (and implore all your friends to do so as well) to bring the powers back into equilibrium, or you cross your fingers and pray that Goliath is (and remains) benevolent.


I expect the browser to look out for my interests as the user. How could the browser do that if it lets the CAs set the terms in the CAs' favor?


Ballot 193 subsequently passed, limiting maximum (Web PKI) certificate lifetime to 825 days from March 2018.

That's definitely not enough to keep Ryan Sleevi happy, but it may be enough to buy CAs a bit more breathing space.
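The arithmetic of the new cap is simple; here's a sketch (the 825-day figure is from the ballot itself, but the helper below is mine, not anything from the Baseline Requirements):

```python
from datetime import datetime, timedelta

# Post-Ballot-193 limit on Web PKI certificate validity.
MAX_LIFETIME = timedelta(days=825)

def lifetime_ok(not_before: datetime, not_after: datetime) -> bool:
    """Would a certificate with this validity window fit under the cap?"""
    return not_after - not_before <= MAX_LIFETIME
```

A two-year certificate squeaks under the cap with a couple of months to spare; the old three-year certificates don't.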


