From my own tests: JXL is better for photos and competitive with AVIF for images with flat areas of color (PNG-like).
JXL is about the same speed as AVIF, but WebP is quicker than both.
I'm not sure how you compared the speed. `cjxl -e 9`, for example, is known to be very slow because AFAIK it is basically `-e 8` plus a brute-force parameter search (and I think this level should be hidden in some way for exactly that reason; it is very misleading). I also assume you ran an end-to-end command-line test with relatively small inputs, which can be a major pitfall: it doesn't only measure the PNG decoding and the actual codec, but also all of the program invocation and finalization time, which can be specifically optimized [1]. (A rough timing sketch follows the footnotes.)
Relying on a single objective metric can also be misleading. Jon Sneyers (one of the JPEG XL devs) has a very good plot illustrating this [2], and he has pointed out that the actual images also have to be reviewed manually to catch mistakes. Many comparisons are made against images with similar SSIM scores, but it is unclear whether they really had similar quality. And even an ideal perceptual metric is not enough; specifics on "taking the AVIF format points as a basis and comparing them with the similarity and similar ratio points" would be needed to evaluate the test.
[1] For example you can skip `free` on complicated structures because they will be reclaimed by the kernel anyway. Of course this is totally unacceptable for libraries.
[2] https://twitter.com/jonsneyers/status/1560924579719258113 (The ideal metric should correlate strongly with the human score and should not distort relative score differences. Most metrics, except for SSIMULACRA 2, which was specifically developed with this dataset, are bad at both.)
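For what it's worth, even a "fair" end-to-end timing needs repeated runs at matched effort levels, and it still includes the process startup/shutdown overhead mentioned above. A minimal Node/TypeScript sketch of that methodology (file names and encoder flags are illustrative, not a recommendation):

```typescript
// Rough end-to-end encoder timing: repeated runs, median taken.
// Caveat (per the comment above): this still includes process startup
// and finalization time, so it is not a pure codec benchmark.
import { execFileSync } from "node:child_process";

function medianMs(cmd: string, args: string[], runs = 5): number {
  const times: number[] = [];
  for (let i = 0; i < runs; i++) {
    const t0 = process.hrtime.bigint();
    execFileSync(cmd, args, { stdio: "ignore" });
    times.push(Number(process.hrtime.bigint() - t0) / 1e6);
  }
  return times.sort((a, b) => a - b)[Math.floor(runs / 2)];
}

// Moderate effort level for cjxl; avoid -e 9 (brute-force parameter search).
console.log("cjxl   :", medianMs("cjxl", ["in.png", "out.jxl", "-e", "7"]));
console.log("avifenc:", medianMs("avifenc", ["in.png", "out.avif"]));
console.log("cwebp  :", medianMs("cwebp", ["in.png", "-o", "out.webp"]));
```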
Instead of using a breakpoint-style approach with several predefined sizes, some sites generate images tied to the viewport size at very fine granularity, so even changing the viewport by 1px will cause the AVIF to be regenerated.
In these cases you can see services like Cloudinary take several seconds to generate the new variant if it’s a large image.
I always confuse it with JBIG2, infamous from NSO's Pegasus zero-click iMessage exploit:
> JBIG2 doesn't have scripting capabilities, but when combined with a vulnerability, it does have the ability to emulate circuits of arbitrary logic gates operating on arbitrary memory. So why not just use that to build your own computer architecture and script that!? That's exactly what this exploit does. Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It's not as fast as Javascript, but it's fundamentally computationally equivalent.
> The bootstrapping operations for the sandbox escape exploit are written to run on this logic circuit and the whole thing runs in this weird, emulated environment created out of a single decompression pass through a JBIG2 stream.
Do give it a try! In our tests we found that the resulting sizes are smaller but HEIF encoding takes a very, very long time. Remember, we provide a bunch of other things like image resizing, cropping, etc., so we always end up re-encoding the image, and encoding to HEIF is very slow.
> Specifically for JPEG files, the default cjxl behavior is to apply lossless recompression and the default djxl behavior is to reconstruct the original JPEG file (when the extension of the output file is .jpg).
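That round trip is easy to verify yourself. A quick Node/TypeScript sketch (file names are placeholders):

```typescript
// Verify lossless JPEG recompression: cjxl transcodes the JPEG by default,
// djxl with a .jpg output reconstructs it, and the bytes should match.
import { execFileSync } from "node:child_process";
import { readFileSync } from "node:fs";

execFileSync("cjxl", ["photo.jpg", "photo.jxl"]);     // lossless JPEG recompression
execFileSync("djxl", ["photo.jxl", "roundtrip.jpg"]); // .jpg output -> reconstruct original

const same = readFileSync("photo.jpg").equals(readFileSync("roundtrip.jpg"));
console.log(same ? "bit-exact reconstruction" : "mismatch");
```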
JPEG-XL is a fantastic format but it has a big marketing/naming problem. I think the lack of popularity/adoption is in large part due to the confusing name.
It sucks that several other two-letter suffixes are also taken: JPEG XR (Windows Media Photo, HDR), JPEG XS (a "realtime" one) and JPEG XT (a "very backwards compatible" one). So besides XL, I'm never sure what the hell I'm dealing with.
Writing a good polyfill for this is harder than you might think, because the browser by default decodes JPEG images lazily to avoid holding too many buffers in RAM. Naive JXL polyfill attempts (niutech/jxl.js) tend to crash browser tabs by using too much memory once a page contains a few dozen megapixels of images, while Chrome normally handles hundreds of megapixels of JPEGs on a page without difficulty.
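A sketch of the kind of viewport-driven decoding a polyfill needs, assuming some wasm-backed `decodeJxl()` helper (hypothetical name): only hold decoded pixels for images near the viewport, and drop them again when they scroll far away.

```typescript
// Hypothetical polyfill fragment: decode JXL only near the viewport,
// release the decoded image when it scrolls far out of view.
declare function decodeJxl(bytes: ArrayBuffer): Promise<Blob>; // placeholder for a wasm decoder

const observer = new IntersectionObserver(async (entries) => {
  for (const entry of entries) {
    const img = entry.target as HTMLImageElement;
    if (entry.isIntersecting && !img.dataset.decoded) {
      const bytes = await (await fetch(img.dataset.jxlSrc!)).arrayBuffer();
      img.src = URL.createObjectURL(await decodeJxl(bytes));
      img.dataset.decoded = "1";
    } else if (!entry.isIntersecting && img.dataset.decoded) {
      URL.revokeObjectURL(img.src);   // let the browser release the decoded image
      img.removeAttribute("src");
      delete img.dataset.decoded;
    }
  }
}, { rootMargin: "200% 0%" });        // decode a couple of screens ahead

document.querySelectorAll<HTMLImageElement>("img[data-jxl-src]")
  .forEach((el) => observer.observe(el));
```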
Unless something changed recently, Firefox never dropped support. It's always been behind a feature flag though. Google gave some reasons on their issue tracker.
For Google, the basic reason is focusing on one next-gen image codec.
A perfectly reasonable decision if made by a committee of disinterested parties deciding the future of the web for the benefit of humanity, but a bad look when pulled by a megacorp.
For Firefox, it's about hoping to get to that latter situation. They want the web to move forward in step as much as possible rather than it being some wild west where everyone just does their own thing. They've lived through that already. And it sucked. Hence the various standardisation and community interoperability measures which have radically improved the web platform over the last decade or so.
Might have to do with https://flak.tedunangst.com/post/on-building-jpeg-xl-for-was...
libjxl being a big C++ pain to build and maybe needing some serious audit/fuzzing. Still, AVIF isn't that much better except libavif being C (libheif is C++ though, and it's probably what's in use).
From what I remember, Firefox was basically neutral on it and didn't want to expend resources on it if it wasn't adopted at wide scale. Chrome basically said there wasn't much interest, so they weren't going to put resources towards it.
Firefox still supports it - there are even pending patches for animation support. Unless it has recently changed, it’s only available on nightly builds + a preference toggle.
I pulled all the patches into Waterfox and have had full support for JPEG XL for nearly a year now.
> JPEG-XL is newest image format and Gumlet is first cloud provider to support it.
This is completely untrue. Cloudinary have supported it since 2020, and that's where Jon Sneyers, the chair of the JPEG XL WG and lead developer of libjxl, works.
Shame on you for making false claims - especially considering your "service" is in direct competition to cloudinary.
Founder here. We are the first cloud service to do it "automatically" based on client support. Most services don't have this, and those that do don't do it automatically and require a code change.
The "automatic support" you mention there is for the AI optimization, which decides which format to use based on the image content. It is unrelated to the client detection.
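For context, "automatic based on client support" usually just means negotiating on the HTTP Accept header (browsers with JXL enabled advertise image/jxl). A minimal Node/TypeScript sketch of that idea; the actual encoding/caching step is left as a placeholder:

```typescript
// Content negotiation sketch: pick the best format the client advertises.
import { createServer } from "node:http";

function pickFormat(accept: string): "jxl" | "avif" | "webp" | "jpeg" {
  if (accept.includes("image/jxl")) return "jxl";
  if (accept.includes("image/avif")) return "avif";
  if (accept.includes("image/webp")) return "webp";
  return "jpeg";
}

createServer((req, res) => {
  const format = pickFormat(req.headers.accept ?? "");
  res.setHeader("Vary", "Accept"); // caches must key on the Accept header
  // A real service would stream the cached/encoded variant here.
  res.end(`would serve the ${format} variant of ${req.url}`);
}).listen(8080);
```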
Now... Do we need JPEG-XL in the first place? What's the benefit? Hardware is fast enough to decode JPEG, also storage is cheap enough to stick to JPEG. What's the point in using JPEG-XL if you're not running an imageboard or whatnot?!
edit: Happy Birthday, JPEG! We also got 30 years of integration, optimization and everything else for that image format, too.
Don't ignore the fact that unlike WebP (lossless: missing grayscale | lossy: forced 4:2:0 chroma subsampling =x | both: missing metadata and progressive) and AVIF (lossless: worse than PNG in many cases, i.e. useless), it's both a very good PNG and JPEG replacement; the only AVIF wins in my book are very low-bpp lossy quality (not very interesting), hardware decoding support, and dav1d being amazing.
I seriously see JXL as a "keep it for 50 years" codec to replace our aging tech.
WebP lossless as a format has two or three efficient ways of encoding grayscale (subtract green mode, palette, perhaps cross color decorrelation would work, too). Perhaps the libwebp APIs don't make it obvious. Density should be quite a bit better than with PNG.
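If you want to sanity-check that density claim on your own grayscale images, a quick size comparison is easy (Node/TypeScript sketch using the standard cwebp lossless flag; file names are placeholders):

```typescript
// Compare PNG vs. lossless WebP size for a (grayscale) input image.
import { execFileSync } from "node:child_process";
import { statSync } from "node:fs";

execFileSync("cwebp", ["-lossless", "gray.png", "-o", "gray.webp"], { stdio: "ignore" });
const png = statSync("gray.png").size;
const webp = statSync("gray.webp").size;
console.log(`PNG ${png} bytes, WebP lossless ${webp} bytes (${((webp / png) * 100).toFixed(1)}%)`);
```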
> High quality progressive decoding at reduced filesizes is a big positive for me.
This is really cool!
Honestly, I want to use regular progressive JPEGs for a current project of mine, but it seems that even those aren't supported in all the tech stacks yet, despite how long they've been around; for example: https://github.com/SixLabors/ImageSharp/issues/449
Here's hoping that in the case of JPEG-XL this will be more commonplace! In combination with loading="lazy", it would surely make the experience of scrolling through modern sites with lots of content a bit less data intensive.
yeah, you are very much at the mercy of the libs you use if you want full feature parity with, for example, libjpeg-turbo in the case of JPEG.
In the browser, where supported, I guess loading="lazy" probably already works (I haven't tried). I think a more advanced version would be nice, where you can choose maybe a "staged" loading type, or some mechanism to pick the pass/frame at which to pause network activity so you can control it further via JS. At a minimum it would enable a preview plus a reduced download for the full version. I can see many use cases where that could be useful.
Oh yeah, definitely. For me the use case is infinite scrolling, where you buffer offscreen images so you can continually flick through an image feed without having to "wait" for a preview to appear. Although you can do this without JXL, it's a much more fluid experience with the progressive download, especially if you micromanage the behaviour of the network requests (i.e. progressively throttling offscreen images past the first frame). The lower file size compared to JPEG is also a benefit there, as you can more quickly resume loading the higher-fidelity frames once the image is in view.
Current browser support, where available, is a lot more restrictive TBH, as you don't have any control over that behaviour. But outside the browser it's solid gold.
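With a custom fetch-based loader you can approximate that control today: read the response stream up to a byte budget for offscreen images, render what you have as a preview, and resume when the image comes into view. A rough TypeScript sketch, with the byte budget standing in for an actual JXL pass boundary (which you'd have to detect yourself):

```typescript
// Fetch an image progressively: stop after `budget` bytes (a stand-in for
// "first progressive pass"), keep the partial data for a preview, and let
// the caller resume reading later when the image scrolls into view.
async function fetchProgressive(url: string, budget: number) {
  const reader = (await fetch(url)).body!.getReader();
  const chunks: Uint8Array[] = [];
  let received = 0;

  async function readUpTo(limit: number): Promise<Uint8Array> {
    while (received < limit) {
      const { done, value } = await reader.read();
      if (done) break;
      chunks.push(value);
      received += value.length;
    }
    // Concatenate what we have so far; a JXL decoder that accepts truncated
    // input could render this as a low-fidelity preview.
    const out = new Uint8Array(received);
    let offset = 0;
    for (const c of chunks) { out.set(c, offset); offset += c.length; }
    return out;
  }

  return {
    preview: () => readUpTo(budget),               // approximate "first pass"
    full: () => readUpTo(Number.MAX_SAFE_INTEGER), // resume to completion
  };
}
```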
https://storage.googleapis.com/avif-comparison/index.html
https://cloudinary.com/blog/contemplating-codec-comparisons