A polyfill (in the form of a wasm blob that implements the decoder) would be an interesting solution at least, since it would make the Chrome experience noticeably worse than any competitor that has native support.
Now all that's left is to generalize that and start a movement where we kill Chrome by making commonly used JS frameworks deliberately perform orders of magnitude worse on Chrome. (I understand I'm being absurd and that your suggestion is not actively malicious like mine is because Chrome is being stubborn in this case.)
Then again - with Chrome being the most popular browser - websites might not want to introduce a solution that results in worse performance for the majority of their users.
It’s like QR codes too. First it was invented… but no phone platform supported it natively, then WeChat in China built a platform and ecosystem within phone platforms and bundled a QR code reader which made it take off in China, then something like 10 years later Apple adds QR scanning to the camera app and we get pervasive QR codes finally. Saying “nobody uses it” is a copout if you are a de-facto monopoly.
That's not correct. Firstly, it had taken off in Japan a long time ago. BlackBerry supported it, in fact, and I remember being at an intern recruiting event where the recruiter was smug about QR codes being on the cusp of mass adoption (being more than a decade early is as good as being wrong for this kind of stuff).
The problem was that the UX sucked. The UX today still kind of sucks but it's infinitely better than what was available when QR codes were first around by being integrated into the camera app.
The issue isn’t the QR, it’s the weird dynamic of a restaurant with a guy who shows you to a table, then you’re ordering from some stupid webpage.
For the price of eliminating underpaid waitstaff, the customer has a weird, slow ordering experience that cannot accommodate undocumented needs. The customer response is: fuck you.
Pretty sure China had no part in popularizing it and third party apps had support for it on iPhone a long time ago. I was seeing and using QR codes here in the US back in 2010 or so pretty commonly. They were limited to college campuses and SF bay area then but still were around plenty.
I remember being frustrated that my non-smartphone flip-phone didn't support them (and no way to add support for them either).
I lived in SF in the 2010s... QR code adoption was abysmal. I'm not saying China is why we adopted QR codes, but because the QR code function was part of the main UX of their main platform (the WeChat app), there was massive adoption of QR codes there before the US caught up in the late 2010s, when our major platforms (Android, iOS) started integrating it into the core UX.
Devil's advocate time. They add support and there's little adoption. Worse yet, there's just enough adoption that deprecating it becomes its own criticism cycle. So now Google is being forced to add support for and maintain an image format because of evangelists?
I think the problem here is that the W3C is lagging on listing which image formats are standard, and vendors are shipping their own in-house tech in the vacuum (and since the W3C is composed of browser vendors, Google, Apple, Microsoft, Mozilla & whoever is there from the community can't agree).
Does anyone have an estimate on how much work it is to maintain support for JPEG XL in Chrome? Your comment sort of implies that it's the kind of thing they'd need to really weigh the pros and cons of before pushing their chips forward, but my (unfounded) assumption is that, having implemented it in the first place, the maintenance would be relatively low, essentially a dim star in the constellation of things a browser manufacturer has to maintain support for. So, it just feels odd that, faced with at least some criticism from the community, and in the face of growing support for the format (e.g. from Apple) they would choose to dig their heels in on this issue.
Supposedly, the person who initially submitted the patch to add JPEG XL support has been keeping it up to date in the event that Chromium wants to re-introduce it.
If that's the case (and if the implementation is sufficient, rather than a leaky prototype) it would seem that the burden is quite low.
This might seem crazy, but I can imagine a world where the first time my browser encounters a file of some new format it gives me the option to download a codec to render it and it doesn't matter if it's google software or not, just that it's signed with a trusted cert.
> I can imagine a world where the first time my browser encounters a file of some new format it gives me the option to download a codec to render it
That world existed, in the 90s and early 2000s, and it was called "ActiveX". The codebase= attribute of the <object> element exists for that use case. It sucked, for two major reasons: first of all, it was a reliable way to get malware onto users' machines (being "signed with a trusted cert" doesn't help when all you need for a trusted cert is money and/or hacking a trusted software publisher). Second, it was very Windows-specific, and a significant obstacle to the adoption of alternative operating systems, hardware architectures, and browsers.
I don't think it was the idea behind ActiveX that made it suck, it was the implementation. Back when we were making Windows XP I found a ton of exploits where you could load an arbitrary COM component from a web page and buffer overrun it. If the same thing was based on modern sandboxed IL it would be a lot safer and more portable between platforms.
We tried the whole plugin thing with NPAPI and ActiveX, and it was a never-ending source of security holes.
This is one of those things that sound nice, but there's a lot of practical caveats, and in reality it would all be pretty complex. Who would provide the "trusted certs"? How do you decide between a "good and safe plugin" and "not so good and unsafe plugin"? How do you update these decoders? etc.
It's a C++ library that Google needs to secure and maintain security patches forever. If there's any bug in that library, it's Chrome and Google that will get blasted and blamed by rabid news media and HNers - so people here are essentially demanding that a private company support and patch their library for ever and ever. Once released, it's pretty much impossible to roll back support for media formats on the web.
Behaving like there's no cost in such maintenance is just whack.
> It's a C++ library that Google needs to secure and maintain security patches forever.
Compile it to wasm and run it in a sandbox? Or throw some rust groupies at it?
> Once released, it's pretty much impossible to roll back support for media formats on the web.
We must live in a time where Flash still dominates short web animations, streaming providers use Silverlight, and every enterprise user is stuck waiting for the company's 5GB Java applet to load, right when IE6 suddenly pops up to kill everything with an ActiveX-based Windows update that adds more video codecs instead of removing them from the system. The list goes on forever. Browsers are not above killing features and breaking things for security reasons, or just because they feel like it.
Maintaining your own rust fork of a codec or forcing the web through another painful breaking change drawn out over years seem like fantastic arguments to wait until the ecosystem is visibly moving to jpegxl before committing to it, actually.
Google is a multi-billion-dollar company. They can't spare a few developers to maintain a mostly isolated feature? (Changes and security updates to the JPEG XL code will have no impact on the rest of Chrome.)
If after a decade no one really uses it, fine remove it - no one will care if it’s unused. But to kill it in its cradle is just BS. JPEGXL should have the same opportunities WebP had.
Who cares if there's little adoption? Why not be feature complete? It's not like Google is some cash-strapped startup.
Heck, macOS ships with terminal definitions so you can log into a Mac using a Commodore B-128 as a serial terminal. It didn't have to, and that's certainly going to have less adoption than JPEG XL.
(/usr/share/terminfo/62/b-128, if you're curious.)
Because this code will be exposed on every single webpage you load, and historically these kind of parsers have had a number of security problems.
Something something memory safety, but this is the situation that currently exists.
"All the image parsers" would be a very bad idea. Years ago there was a bug in some Linux setups where all gstreamer codecs were exposed in Firefox, and it was a huge problem (similar: https://lwn.net/Articles/708196/ – although the one I remember is much older, around 2010 or so).
It's not really comparable to all the old terminfo entries from decades ago: the vendor controls those terminfo entries, not $random_websites. Maybe the ncurses terminfo parser actually does have some buffer overflow (there's an entire mini-programming language in terminfo), but if it does it's not really a huge acute problem.
Features == potential bugs. That's why. You're increasing your attack surface for critical infrastructure for many - you better be doing a cost/benefit analysis.
Photoshop still can't open webp files, and many websites still don't support uploading webp files. Do you not see how it's annoying to drag a file out of Google image search, see the extension is .jpg.webp, then have to convert it back into JPEG to edit it, thus adding a third layer of compression artifacts to the image.
> many websites still don't support uploading webp files.
On GitHub, if you rename your WebP file to "file.webp.png" it will upload just fine, and will use the correct Content-Type header when serving. It's really frustrating that a simple \.(png|gif|jpe?g)$ check is preventing the uploading of these files. I found this is actually the case on a number of platforms.
Honestly, I have 0 idea why I'm downvoted through the floor or why you're responding aggressively. I definitely understand it's bad; the message I hoped was clear is that it's a bad idea to support yet another format that will take a decade to permeate.
Why is Adobe so bad at their job? GIMP added webp support about 5 years ago. Paint.NET added it 4 years ago.
> many websites still don't support uploading webp files.
4chan is the only such site I've heard of, and 4chan is notoriously bad at this sort of thing ever since they lost moot. It took them years to permit uploading of vp9/opus webms, for no good reason. 4chan's present administrators are incompetent, but 4chan is barely profitable (if at all) so that's not really unexpected.
But Adobe rakes in huge piles of cash, so what is Adobe's excuse?
> It took them years to permit uploading of vp9/opus webms
Did they finally do that? I left permanently a couple years ago (after being there regularly since 2006) because the small "these kinds of threads are why I still hang around this place" threads got more and more infrequent and the regular users got more and more annoying and less fun. Most of the site just became a constant pool of angry racism, cynicism, and paranoia. I can handle seeing stupid racism, but the death of fun and the constant angry sarcasm just got old.
Anyway, I was constantly annoyed that I couldn't upload vp9 webms, and that apng was also not supported given how much better than gif it was. Webp would have been decent, too.
The 'right' solution would be to just use system codecs for everything. Many apps need good implementations of image codecs. They just need to be implemented once by the OS vendor (or the toolkit on Linux).
Now somebody is going to say that's bad because it will fragment the ecosystem, and people will rely on something that works on one platform but not another. But you know what, that's not much different from something that works in one browser and not in another. Just file a bug against your OS. There are only finitely many image formats in the world. I'm sure Qt, .NET, Cocoa, ... have every format that you could conceivably need, and if not it should be easy to add them once.
> The 'right' solution would be to just use system codecs for everything.
That doesn't always work. VLC became as popular as it is partly through bundling its own selection of codecs for its own use, back when getting certain things to play in some places was quite a faff.
If doing it the right way is hassle to the end user, we'll often sidestep rather than campaigning for fixes where they should be made.
Right. We already made that mistake with fonts and still have only provided workarounds (essentially the same as polyfills) instead of just standardizing a reasonable base set. This would be even worse with image formats.
> The 'right' solution would be to just use system codecs for everything. Many apps need good implementations of image codecs. They just need to be implemented once by the OS vendor (or the toolkit on Linux).
Windows has done this and is still doing it, but the decades-long track record so far is that it does not work well. It can work, in a very limited scope, and if you have a lot of influence.
Sure, it's really nice if an 8K@60Hz HDR HEVC video plays perfectly straight in your browser or desktop app, but more often than not, it just won't. You don't have the right browser, the extension installed (due to license agreements), good enough graphics drivers or someone has forgotten a flag yet again.
And we haven't even gotten to the immense amount of variation each codec introduces or the potential attack surface.
In the end that "just" carries a lot of burden, it can't be the users reporting these issues.
It's just way easier to leech off of ffmpeg and similar and let it deal with all the formats, instead of hoping that maybe you can leverage what the OS gives you and that it works, and works correctly, in all your edge cases.
Though not everything is that gloomy, there are Vulkan extensions that might (in the future) simplify cross-platform image and video decoding (and HW acceleration).
That's a great narrative, except that in this case the "someone" in point 1 and the "browser developer" in point 2 are the same company, and the "competitors" from point 4 are not supporting the feature either.
Maybe let's wait for Apple to actually announce the details (which neither the cropped screenshot or the talk abstract have), and give the Chrome team some time to react and make a statement. Rather than have the 10th rehash of this on HN with an incendiary title.
That is exactly what Google hopes Chrome will become: the IE6 of the new era. Everything they've done for the past few years points to this, including decisions like this one.
There are very few people working with image codecs, and you will not find a codec "virgin" among people responsible for these subsystems.
One of the main JPEG XL contributors is a WebP contributor and a Google employee. If JPEG XL got shipped first you could make the opposite conspiracy from that!
No, not really. If JPEG XL had a time machine and made itself deployed on the Web before WebP, I'm pretty sure nobody would want WebP in. It'd be an objectively stupid idea to adopt WebP having JXL available, and I'm pretty sure even WebP authors would agree with that.
The Web is not supposed to be a Katamari Damacy of codecs. It doesn't need multiple redundant or worse ways to do the same thing. There's a high cost of adopting a new format Web-wide, so it's rational to do it rarely and only when benefits outweigh the costs.
That's a totally different scenario though. AVIF and JPEG XL both have gaps, so contemporaneous adoption makes sense. It's like trying to argue "we don't need PNG, we have JPEG".
That's frustrating. Having a royalty free, high quality, video codec is great. But it shouldn't prevent us having a significantly better still image format.
The referenced comment doesn't even take into account that JPEG XL's lossless compression is also more efficient than PNG's. That, together with its easy handling and efficient storage of (short) animations, could let it replace all three major image formats: JPG, PNG and GIF.
Even though GIFs are now mostly replaced by WebM and the HTML5 video tag, unifying the image formats for both orthogonal uses (natural images vs. technical or generated images) is a big advantage.
I'm still disappointed that you can't use (silent) videos in <img> tags or CSS properties where you can use images. Animated WebP is so much less efficient than WebM it's not even funny. There are many places where you can already embed images (including GIF and animated WebP) but video is not supported or has other restrictions.
But you can already do that. AVIF animations are regular AV1 videos. There is zero end user visible difference between what the above user is proposing and what AVIF does. Allowing webm in img tags would be identical.
I think the big difference since the recent threads is the news that Apple is adding support for JPEG XL to Safari.
For HEIC, I understand, but it's weird that they are the only ones supporting JPEG XL while it is a royalty free open standard and the open source browsers don't support it.
I am aware, but note that Safari relies on ImageIO for image decoding and AVFoundation for video decoding, two components of the proprietary OS, whereas Chrome and Firefox ship a lot of their own decoders. Those Apple parts are not open source, and neither is Safari's GUI. Chrome, by contrast, is open source even at the GUI level; the closed parts are really just a bunch of Google-custom modifications/extensions on top of Chromium.
Posting obnoxious, demanding messages on the Chromium bug tracker is not the way to win friends & influence people. It's just an image file format for Pete's sake.
"Experimental flags and code should not remain indefinitely"
Why wasn't AVIF hidden behind an experimental flag?
"There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL"
This is gaslighting. The representatives of companies such as Adobe, Facebook, Intel, The Guardian or Shopify have voiced their support for the format. Not to mention the countless individuals. I don't recall AVIF getting anywhere near this level of interest.
"The new image format does not bring sufficient incremental benefits over existing formats to warrant enabling it by default"
BTW, this article doesn't even mention all of JPEG XL's advantages. For instance, it doesn't mention the high-resolution support. AVIF images are limited to 65536x65536, and even below that, images larger than 8192x4352 must be tiled, which results in border artifacts between the tiles. JPEG XL, on the other hand, has no problems with images up to 1073741823x1073741823.
Way to be reductive. It's a very thorough summary of the situation, hardly low effort or obnoxious. If you think it's 'just an image format' then you aren't the target audience. Move on. This was a very controversial decision that could have long lasting effects on the industry. See the rest of the linked thread if you are actually interested in why that is. If not, why comment?
How is that message obnoxious? It is a well thought-out message which clearly demonstrates the user's good technical knowledge and passion for the topic.
I can't believe how soft the general consensus on Internet discussion has become. We will never get anything done if people consider this level of mild and sensible criticism obnoxious.
A lot of the subsequent recent ones are, I wasn't really talking about the original ones. Various randoms are brigading the bug in the misguided belief that it will achieve anything.
It's not really obnoxious, even if it were, it's ridiculous to expect people to simply silently tolerate all the decisions a browser with a share majority makes.
macOS Sonoma will support JPEG XL, so presumably all Apple platforms will, come September or so. OP is blogging by title edit, less than 24 hours later, about how Chrome hasn’t reversed their decision.
Saying they haven't reversed their decision sounds like they affirmed their previous decision in spite of changing circumstances. Nobody popping up to say anything is more a case of "nothing's stirred publicly on the Chrome front" than "Chrome still hasn't changed their opinion."
I agree that it’s a misleading headline, but sadly there’s nothing more I can do to remedy that, other than clarify for others in a comment (and flag the post).
Half joking: in the world where Google added JPEG XL using the normal process, the reaction would vary from mild disinterest to outright disapproval. "Google is forcing yet another image format down our throats!"
But by adding and then removing the feature, they've made it a competition. Now JPEG XL is building grass-roots support, and if/when Google relents and adds JPEG XL back, the feature will have far more support than it would have gained the boring way.
It isn't competing with Google on Blink development, it's simply adding support for the new JPEG image standard (which they're probably going to have as a system codec anyway in that case). Note that they've had HEVC playback support in Edge before Chrome.
This terse comment from the bug is an accurate summary of how projects work at Google, whose side effect was JPEG XL's removal:
> The code has been removed from Chromium (comment #281), I'm closing this bug for now. If leadership revisits the decision [1] it can be reopened.
There you have it--the people making decisions are out-of-touch, likely-non-technical managers, not engineers. Engineers are the ones writing the code and shipping features. Why not empower them to make these decisions?
Without a doubt. If you're lucky, they might know the difference between a JPEG and a PNG. This is a technical discussion at its core; you wouldn't expect Chrome's actual users to know about Flexbox or ES6, either. There's little you can't do without them, but things are transparently better with them anyway.
Give it a minute. The news that Safari would add JPEG XL just dropped yesterday. Given that big change the decision may be reconsidered, but not in one day.
I would really like to start converting some of my personal media and websites to use JPEG XL, but the momentum doesn't seem there yet - despite clear technical and practical benefits.
I just went ahead and did it, then included fallbacks to supported formats. What I really want is a standard way to let the browser request lossy or lossless images.
I'm curious whether Apple will eventually move away from HEIC for their camera roll and other internal uses. Now that would be an incredible win for royalty-free codecs.
As for browsers, this looks like the competition that's needed to convince Chrome. If it gains adoption and gives Safari a performance/quality edge that users notice, Chrome will have to follow.
For browser vendors, all web-exposed code is a maintenance cost, security risk, and compatibility risk, so they generally don't add stuff just because it's nice. But they do add stuff to beat their competitors.
We may have to wait for hw jxl encoders for that. Which is likely to happen, but it will definitely take time.
Meanwhile there will be software options for authoring, e.g. Adobe Camera Raw.
For the web, the trade-offs between encode speed, compression, and fidelity consistency are quite clearly in JXL's favor. AVIF still has the advantage in terms of support, of course, but deploying JXL just for Safari already makes sense for many use cases. When Chrome follows, it will become a no-brainer.
JPEG has been hitting its limits for an extended period:
- JPEG can only do 8-bit color depth, no HDR.
- JPEG can only do lossy compression, with no lossless support.
- JPEG lacks good compression for graphics images.
- JPEG cannot do alpha transparency.
- JPEG cannot do animations.
- JPEG does not support multiple layers.
- JPEG's compression efficiency is 30 years old and not as good as modern codecs'.
- JPEG comes with the annoying compression artifacts we all love: banding, noise, blockiness, and more banding.
> if there is any advantage to JPEG XL over WebP or [AVIF]
Of course there is. Better compression than WebP, and unlike AVIF, it supports progressive decoding, which is super important for users on a slow network. Although AVIF can sometimes produce 50% smaller files than WebP, many site owners will opt for WebP anyway because it can be decoded incrementally as it arrives, so their website will display something while an image is loading rather than nothing until the whole image has loaded. With that said, JXL achieves comparable compression to AVIF, and it suffers way less from generation loss, too.
Going mostly by a quick check of the Wikipedia descriptions, JPEG XL seems to live up to the XL part: larger image sizes, more bits per channel, more channels... WebP and AVIF seem to inherit size limitations from their corresponding video codecs; AVIF, for example, will start tiling, with visible artifacts, at 4k/8k (depending on direction) to support larger images.
With that Achilles' heel in tiling and the slow, PoC-tier encode and decode performance, I'm amazed AVIF has gotten the adoption that it has. Add in that in lossless mode JPEG XL is almost always smaller, sometimes by up to 4x, and I'm just not sure how it's even a competition. AVIF has so many compromises baked in that didn't need to be made.
Basically lossy WebP and AVIF are video codecs, which are designed for the kind of quality you want when you can only see an image for 40 milliseconds or less. For still images, higher fidelity is usually desired but the video codecs tend to struggle to even keep up with the old JPEG at those operating points.
Similarly, video codecs are not designed for progressive decoding (rendering previews of a frame based on partial data), because that's a feature that doesn't make much sense for video. For still images on the web, progressive decoding is considered a desirable feature though to improve the user experience.
- In my own testing JPEG XL demolishes AVIF and WebP in terms of bitrate at my desired quality, particularly for artifacts around sharp edges.
- This can (in theory) be solved in other formats by improving the encoder, but the current jxl encoder is pretty much "set it and forget it" in terms of getting a good quality; other encoders are far more variable (e.g. I would use a different quality setting for B&W vs color and line-art vs photo).
- In others' testing (I don't use this feature) JXL has better lossless compression.
If you're referring to the first answer, the last update is almost a month older than the article I posted. Are you referring to something else?
If not, are you suggesting the article I posted is lying? There are a bunch of articles on this topic/flag, which were posted on the same day as the link I wrote earlier (17th of April).
I just tried in today's Edge Canary - the AVIF image is not shown correctly, but the flag does make Edge attempt to load it. Since it's the Canary branch, you know there is a good chance it's just broken in the current build, right?
I don't know why HN is so obsessed with this particular image format. Adding image formats on the web has a really high maintainability cost and there is a fairly reasonable argument that it wasn't worth it in this case.
It's a better WebP, without patents and license fees (unlike HEIC). There are also advantages in encode and decode speed compared to other formats. It's a superior format compared to most alternatives in many situations.
Adding the format and maintaining it requires some work, but the potential for load speeds and data savings is huge. It's also finally a somewhat efficient format for bitmap animations after all these years of GIF and APNG. I've also run into the 16k size limit while converting some PNGs to WebP myself, so JPEG XL would be a nice way to losslessly compress those images more efficiently as well.
That said, so far WebP is serving me fine in most cases, I don't really care if it takes two weeks or two years for JPEG XL to make it into the mainstream.
It doesn't matter that it's better than WebP, because that isn't the alternative. The alternative that browsers have shipped is AVIF. JPEG XL's advantage over AVIF is less significant. It's not a clear win, but diminishing returns and nice-to-have features vs AVIF's wider deployment and head start in AV1 implementations.
The analogy makes no sense, given JPEG XL was primarily created by Google. That's what makes the reaction so strange.
If JPEG XL had been enabled by default in Chrome instead of removed, would HN be happy? Or would we be up in arms about how Google is trying to force yet another image format on the browser ecosystem?
JPEG XL is a collaboration between a team at Google Research Zurich and Cloudinary. AVIF is no small part developed by Chrome developers. Treating Google as a unified monolith doesn't make any sense. It's perfectly possible for Chrome to behave like 90s Microsoft and for the Googlers in Zurich to do completely different things.
Looking forward to when edge services like Cloudflare, Cloudimage, imgix, etc. support JXL too. AVIF has such harsh encoding times, and it's also a bit too bulky to encode yourself in wasm on the edge.
So far the blocker has been the lack of alternative implementations of JPEG XL.
A big, complex format implemented in a young C++ codebase is scary to deploy. Additionally, everyone using the exact same code creates a risk of files being "bug-compatible" with that code, rather than conforming to the spec.
There are now multiple alternative implementations in the works, including in safer languages, so hopefully it will be easier to deploy in the near future.
I was talking with a Googler on Mastodon the other day who was astonishingly ignorant about how his own product works and blaming me for it. If they were going to put a definition of "gaslighting" in the dictionary, it might be good to put Google's product evangelism in as an example. There was that time I met Bing's developer evangelist for search at a conference and told him that if he were Matt Cutts I would have called room service and ordered a cream pie.
He's seeing with visible light, I'm seeing with X-Rays.
I've been in the trenches for web development (and a little bit of web browser development) since 1995. I see the social factors around browser development (very little diversity) and the supporting software as having a direct link to the very difficult problem of a web browser not blocking the UI threads when it is updating a page from concurrent data sources including the net. Struggling to get a post-Netscape web browser to run on Solaris so our Sun Ray installation wouldn’t be useless…
I don't appreciate being quite literally dehumanized. It's the first step to the gas chamber.
A "dev evangelist" for Chrome. Amazing that somebody with a job that involves communicating with the public would be so rude. Seems to have stolen his playbook from the bullies that chased me around the playground in elementary school.
I’m glad that you recognized that YOShInOn fulfills its brand promise.
I’ve cultivated certain kinds of randomness for a long time, for instance you will not see me post a string of links to phys.org or world nuclear news to HN but rather mix it up. YOShInOn is my better half and is less hostile than I am even though eliminating hostility was not a primary goal, the source material it works from is low in hostility (probably the worst input is The Guardian) and my curation process (I look at 100% of everything it outputs) has a further cooling effect.
I am already bugged out by the angry p̶e̶o̶p̶l̶e̶ toots on Mastodon and seriously thinking about making a hostility filter.
The post by "@jaffathecake" reads like a truism. The main thread is indeed where main-thread business like (re-)layout happens, not just JavaScript.
I have no idea why that would cause OP to act like a Texan boomer just spotted a gas station clerk wearing an N95.
For the next few months at least, it looks like JPEG XL is just going to be one of those formats for Apple users, like HEIC. With people complaining about programs not supporting WEBP all the time, JPEG XL isn't going to gain much popularity in the mainstream either.
At least people supporting Safari will be able to make use of the format for faster load times soon; the <picture> element makes the transition quite painless after all.
An interesting difference with the WebP situation is that any app that uses Apple's codec framework on iOS/iPadOS, which is almost all apps that deal with images, will support JXL automatically.
Actually, I just downloaded a JXL via Safari, and after confirming that it was still a JXL, I tried using it in a context where JXL isn't supported, and it automatically turned into a JPEG.
JPEG XL is a file format for large raster images that provides a better compression ratio and higher quality than JPEG.
Google owns the license to another proprietary image format for large raster images (WEBP).
Google used to support JPEG XL in Chromium, but dropped support for it. Many people believe this is specifically to protect a market advantage for WebP.
WebP has a patent license and anyone can use it. People have been using and implementing it for over a decade, without problems. WebP also isn't even comparable to JPEG XL: it's the "old next-gen" image format – it sits between JPEG and AVIF. There is just no incentive for Google to push for WebP: they have nothing to gain by it.
Chrome never properly supported JPEG XL; it was hidden behind a feature flag, which of course almost no one enables. Their concerns are also valid: "we don't want to expose users to security-sensitive code without significant benefit for the users". Supporting as few image formats as possible is essentially a good thing for everyone; see e.g. https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=libpng
Google has banked on AVIF, which has quite a bit more traction. I do think that JPEG XL would be a good addition, but I also think it's reasonable to have a different opinion on that.
Everything about this post is wrong. The "many people who believe" this are engaging in nonsensical conspiratorial thinking without any evidence, which is pretty much what I've come to expect on HN about anything Google-related.
> WebP also isn't even comparable to JPEG XL: it's the "old next-gen" image format – it sits between JPEG and AVIF.
This is a nonsense statement of opinion.
AVIF is severely more computationally expensive than JPEG XL (over 100 times worse at high bitrates), limited to 4k resolution, has worse compression, and is missing desirable features like progressive decoding and lossless recompression.
> There is just no incentive for Google to push for WebP: they have nothing to gain by it.
They have complete control over the standard. No one else can make changes to it without their approval. JPEG XL is governed by ISO.
I don't know if Chrome has a feature like that, but if it does it doesn't seem to have been used for the libjxl integration.
Lots of things are possible in principle; it's also possible to write rs-libjxl which would significantly reduce the problem. But there's a difference between "what's possible" and "what the reality is today", and in the context of what the situation is today it seems to me the concerns are valid.
You're being misleading with the comparison to WebP - the image format that all browser vendors (including Chrome, Mozilla, Safari and Microsoft) support and that competes with JPEG XL is AVIF.
Only Google can make changes to the WebP specification and can do so unilaterally. The JPEG-XL specification is governed by a recognized standards body (ISO).
Why can't browsers have features as separate binaries on the OS? For example, I can drop Ogg Vorbis support from my Debian system, but it would be a nightmare if Debian's devs managed to dictate which codecs I can and cannot use.
Linux distributions are free to distribute their own builds of Chromium with whatever features they want. It's just that distributing a fork of Chromium, even with minimal changes, is a full time job for several people at least, due to the fast pace of updates and the security critical nature of browsers. And people prefer Chrome anyway due to the features that rely on Google servers like sync or translation.
The wasm implementation is too slow for mainstream adoption.
It also has to decode the image to a format supported by Chrome, which means you lose a lot of HDR capability. JPEG XL can go up to 32-bit, but the supported formats max out at 12.
WebP and AVIF are basically video codecs that can incidentally render a frame as a static image, and are good at masking low-res artifacts.
On the other hand, JPEG XL is for static images only and is way better at rendering fine details.
It has a bunch of specific features and optimizations. If you want to know more detail, check this post: https://cloudinary.com/blog/the-case-for-jpeg-xl
The big one is that you can transcode existing JPEG files effectively and reversibly to JPEG XL without any additional loss. So you can get the benefits of the new format without having to re-sample your existing JPEG images.
TLDR in the article is that both technologies are valid and have a significant use case. I still work plenty with the browser and would _absolutely_ use JPEG XL as a replacement for WebP images, traditional JPEGs, and PNGs if I could.
It's a shame Google/Chrome is not supporting the tech. It would be a major improvement in the landscape.
The TLDR version is AVIF is better at low bitrates/quality, JXL is better at higher bitrates/quality. And JXL has some other niches, like easier encoding/decoding, lossless conversion of JPEG, and support for some exotic formats that AVIF does not support.
It's already been implemented. Chrome used to support JPEG XL if you enabled the hidden flag. The author of that code is still keeping it up to date in case Chrome wants to merge it back.
That mythical someone you speak of has done all the correct things already. This is squarely Google not wanting that feature.
Chrome and Chromium are the same dev team. Open source doesn't mean that they'll accept any pull request. It was implemented in Chromium, as an experimental feature, and then removed.
Excellent. This is a good approach. The most hilarious way to deal with Google's lack of leadership would be to build out a browser that rivals their offering on top of their own core.
Well, sorry you think that. I wrote the product exactly because I was tired of hitting browser limitations in previous projects, but the alternatives were too difficult and so people would sometimes just give up rather than tackle the deployment problem.
This thread is about people upset about browser limitations, something that's trivial to fix when not writing web apps - just add libjxl into your app and use it.
So that's why it seems relevant to me. Yes, it doesn't help if what you're doing has to be a web page, but often there's a choice.
The specification was frozen "only" in December 2020. That has been enough time for a lot of software to add support for it (GIMP, ImageMagick, Krita, FFmpeg, Adobe Camera Raw).
But it will take a long time before cameras start using it. And since a lot of popular graphics software cannot export to JXL yet (Photoshop, Clip Studio, Lightroom, ...), most professionals just can't use JXL (short of saving a lossless image and then converting it to JXL in other software).
And in general, since the format is pretty new, most people don't know about it.
That is why support in the biggest browser would have been huge for speeding up its adoption.
Software support for WebP was also a massive problem up until the last few weeks. Because Chrome forcibly converts image files to WebP, it made my Photoshop work a hassle.
I've mainly had it with PNGs, but yeah! When I open a PNG, mainly on Fandom sites, the URL will say it's a ".png" extension yet it will only download as a ".webp".
If you use a service like Cloudinary then it will detect the appropriate image format to serve to the browser based on the request. I would expect JPEG-XL to start being served to Safari clients once macOS 14 and iOS 17 are released.
Chrome can hold out but ultimately all they are doing is hurting their users.
You don't even have to use a service. A picture element containing sources by priority, and then finally an img fallback for the obsolete browsers, allows you to use the best possible codec that the client supports.
A certain company that commented on the thread already converted their image processing pipelines to use JPEG XL, given the significant bandwidth savings. Now all they've got is a significant hole in their budget, since it may never pay off.
We can't know how many are using JPEG XL, since there are no available metrics to find out.
Good thing JXL has the best progressive support of any image format available! [0, 1] It's something you can only find partially, via incremental decoding, in WebP, and it doesn't really exist in AVIF (just one of the many limitations of being bolted onto a video codec). With this capability you might often reconsider needing multiple versions of a file (something that wasn't exploited much, or at all, with "dumber" and heavier progressive JPEGs).
With the high cost of cloud storage I see it as a pretty big burden to host multiple copies of an image, particularly if you feel compelled to have multiple optimized images such as JPEG, WEBP, AVIF, ... That's why I didn't start using WEBP for images until support was universal enough that I could give up JPEG.
(For a large image collection you have some images that are heavily requested, where most of the cost is network transfer, but you also have many images that are infrequently requested, and for those the cost of storage is the dominant factor. I launched a large image collection in the late 2000s and many sites that did the same thing at the same time were ultimately crushed by running costs; Pinterest and Instagram were survivors. Overall the economics of video collections turned out to be better than images.)
We're currently generating JPG, WebP, and PNG, depending on the source and the target. The problem with doing this on-demand is that formats that are great for delivery but slow to encode aren't useful. I did try to add AVIF, but the latency on the first request was too high for it to be practical.
I guess "preheating" cache with preemptively encoding and putting new content in cache before it is shown to client could work?
But still, anything AV1-based just needs a lot of compute to encode, and it just doesn't feel worth it at the moment. Our system for encoding videos was tooled for that, but developers eventually decided it's too slow/CPU-hungry to bother. That was some time ago though; encoders did get better...
In internet-scale systems complex delivery systems involving transformation and caching are not necessarily a win for performance insofar as once you give up a millisecond of latency you cannot get it back. You very much can get a win out of a CDN but somebody has to do aggressive tuning of absolutely everything.
It's not for performance but for not having to permanently store terabytes of differently encoded data.
You could possibly also pre-populate cache - all new media files get sent to encoder and cached preemptively, and if they are popular enough they just stay there till they are not.
1. Someone comes up with a cool feature.
2. Browser developer refuses to incorporate it since "No one uses it!". Pushes developed in-house technology instead.
3. No one uses the feature as a result. "See no one wants it!"
4. Competitors start to implement the feature
5. ???
And no, a polyfill is again not the (right) solution.