Hacker News

> so it’s hard to make the case for keeping it.

How about “not breaking stuff” which can not be upgraded? Like old sites/services without active maintainers but still useful. Or hardware appliances that still work, but will not get firmware update ever. Let alone rss feeds, brought up multiple times in the linked thread.
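For context, the RSS case works like this: a feed opts into XSLT styling with an `xml-stylesheet` processing instruction, so the browser renders the raw XML as a readable page. A minimal illustration of the pattern (file names are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="feed-style.xsl"?>
<rss version="2.0">
  <channel>
    <title>Example feed</title>
  </channel>
</rss>
```

Remove browser XSLT support and visitors to such a feed see raw XML instead of the styled page, with no way for an unmaintained site to fix it.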

Looks like a built-in polyfill (similar to pdf.js in FF) would do. But Google seems reluctant to do it.
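A page-level fallback along those lines is easy to sketch. This is a hedged illustration, not a real shipped library: `xslt-polyfill.js` is a placeholder bundle name, and the detection logic is pulled into a pure function so it can be tested outside a browser:

```javascript
// Decide whether an XSLT polyfill is needed, given the global object.
// Keeping this a pure function makes the decision testable in Node.
function needsXsltPolyfill(globalObj) {
  return typeof globalObj.XSLTProcessor === "undefined";
}

// Illustrative browser-side usage (assumed polyfill path, not a real package):
// if (needsXsltPolyfill(window)) {
//   const s = document.createElement("script");
//   s.src = "/vendor/xslt-polyfill.js"; // hypothetical bundle
//   document.head.appendChild(s);
// }
```

The point of the thread, of course, is that a polyfill shipped *with* the browser would cover unmaintained sites too, which a page-level script cannot.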



When’s a reasonable time to pull the plug on out-of-fashion legacy stuff? Things can’t always remain backwards compatible forever. I think the places where this is still in use can build contingencies where required.


Why can’t things remain backwards compatible forever? In the 35 years that the Web has existed, browsers have come pretty damn close to meeting that standard. The one huge exception is the removal of plugin support around 2015, and the concomitant death of Flash and Java applets. There were also some major browser-specific APIs that got killed off, like ActiveX and NaCl. But when it comes to standardized, browser-native functionality… very little has ever been removed. I would prefer it if I could say the same thing in another 35 years.


> Why can’t things remain backwards compatible forever?

I already said why:

> The complexity and attack surface area isn’t justified by its utility, so it’s hard to make the case for keeping it.

If you read the GitHub issue that this submission links to, it points out security vulnerabilities and links to this talk:

> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.

https://www.offensivecon.org/speakers/2025/ivan-fratric.html


for those who have been looking, the actual presentation where they talk about this appears to be here: https://youtu.be/U1kc7fcF5Ao


Things can remain backwards compatible forever. That is what any good standard does. Web standards and much else in software are sadly a complete mess where too few care about all the downsides of instability.

I am a bit worried because for many years I used plugins like SinglePage to save web pages as HTML. That is not exactly future-safe, since every release of Chromium or Firefox has a list of things that were deprecated (and a list of things that changed, which might or might not break rendering of old pages). Old saved pages will eventually begin to degrade, and some might eventually be unreadable without having to mess with virtual machines to run old browsers.


> Things can remain backwards compatible forever.

This is exactly the attitude that has left us with only three complete extant implementations of the web, two of which are controlled by an ad company.

Indeed, to me it seems that at some point, you either have to

a) freeze the standard

b) drop old stuff

c) accept that there is no standard

and with the web as a whole, we are firmly headed towards option c). So I find the short-sightedness of all people pushing back against this proposal unfortunate.

(Also note that dropping a barely-used Turing-complete language from the web is not comparable to removing deprecated HTML elements. The latter typically requires just a few lines of CSS in the UA style sheet, so I doubt anybody is considering doing that.)


As if oligopolization of the browser space wasn't a deliberate objective of the WHATWG.

I would rather have them deprecate the HTML syntax, which is a nightmare to parse, a nightmare to escape for ( https://sirre.al/2025/08/06/safe-json-in-script-tags-how-not... ), and a nightmare to securely transform (CVE-2020-26870). Now that MSIE is dead, all mainstream browsers support XHTML just fine; compared to HTML, XML is much, much simpler to write a new implementation of; and few people generate markup by printf any more.
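The escaping hazard alluded to above can be shown concretely. A minimal sketch (the function name is made up, and a production version also needs to handle `<!--` sequences): the HTML parser terminates a `<script>` element at the first `</script` it sees, even inside a JavaScript string literal, so JSON destined for inline script tags must have its `</` sequences escaped:

```javascript
// Serialize a value as JSON safe to embed in an inline <script> element.
// "\/" is a legal escape in both JSON and JS strings, so the escaped
// output still parses back to the identical value.
function jsonForScriptTag(value) {
  return JSON.stringify(value).replace(/<\//g, "<\\/");
}
```

XHTML sidesteps this entire class of problem only if the script content is in CDATA or external, which is part of the commenter's point about HTML's parsing rules being a trap.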


> As if oligopolization of the browser space wasn't a deliberate objective of the WHATWG.

So why is it that when once in a blue moon the WHATWG tries to do something that also happens to help new implementations, web devs come in to cry bloody murder? Maybe there are co-conspirators to the scheme? :)


> This is exactly the attitude that has left us with only three complete extant implementations of the web, two of which are controlled by an ad company.

No, adding complex new interfaces and then demanding that every browser implement them quickly is what does that. Google is not proposing to rein in that behavior.

> a) freeze the standard

Yes, or rather new features should become rarer over time.

> and with the web as a whole, we are firmly headed towards option c)

Which has nothing to do with backwards compatibility but with Chrome’s appetite for adding new APIs, presumably with exactly this outcome being their goal.


> No, adding complex new interfaces and then demanding that every browser implement them quickly is what does that.

New interfaces forced into the spec without concern for complexity have led to the demise of existing browser engines.

But the lack of new implementations is also a result of the insistence on keeping all obsolete interfaces, no matter how complex or how little their remaining usefulness, at all cost. (Looking at you, document.write...)

> > a) freeze the standard

> Yes, or rather new features should become rarer over time.

This is far more unrealistic than dropping XSLT support.


The thing is, the new APIs added to the web are generally quite well specified, with quite good tests. These older legacy APIs sometimes don't even have a standard and almost certainly don't have interoperability (XSLT is an example of one that definitely does not work consistently between WebKit/Blink and Firefox).

For a new browser engine adding some of the new APIs is pretty trivial compared to debugging all the nonsense that comes from these kinds of underspecified legacy APIs. Removing XSLT from the spec and existing browsers means new ones don't feel the need to implement it. They don't need to decide which implementation to go with (use libxslt like chromium and webkit and you might match their behaviour but you also get all the same security vulns).

Frankly, a modern engine could probably skip handling XML entirely (i.e. no XHTML document support) and get by just fine, but that's a separate discussion.


If you want to build a stable platform: never.


Absolute stability in that sense is almost never the goal. Plenty of web standards have been deprecated. They gently push things in a direction and cull what is seen as low value or high risk.

I'm not advocating for or against this specific item, just saying we shouldn't perpetually add and bloat future maintenance demands just because we want to support every single thing that's ever been built. We should be able to remove/delete/deprecate in a way that allows reasonable notice to those that could be affected. I'm certainly not advocating for sweeping breaking changes like may be found in some web frameworks, etc. We should expect that browsers move slowly. But there still needs to be some process for culling things IMO.


The web platform has repeatedly removed features like this in the past, and it’s the most stable platform in the history of the computer industry.


Let's remind ourselves that, thanks to Google, we also did not get WebGL 2.0 Compute; it was too much for the Chrome team to split their resources between WebGL 2.0 Compute and WebGPU.

How great that five years later WebGPU is something we can rely on in a portable way. /s



