First automatic JPEG-XL cloud service (gumlet.com)
89 points by adityapatadia on Sept 20, 2023 | hide | past | favorite | 63 comments



From my own tests, JXL is better for photos and competitive with AVIF for images with flat areas of color (PNG-like). JXL is about the same speed as AVIF, but WebP is quicker than both.


Is this your test, by any chance? https://ache.one/articles/web-image-formats

I'm not sure how you compared the speeds. `cjxl -e 9`, for example, is known to be very slow because it is AFAIK basically `-e 8` with a brute-force parameter search (and I think this level should be hidden in some way for this exact reason; it is very misleading). Also, I assume you did an end-to-end command-line test with relatively small inputs, which might be a major pitfall, because it doesn't only measure the PNG encoder/decoder but also all program invocation and finalization time, which can be specifically optimized [1].
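
To illustrate the pitfall (hypothetical file names), a naive end-to-end benchmark like

  time cjxl -e 7 tiny.png tiny.jxl
  time avifenc tiny.png tiny.avif

on a small input mostly measures process startup, PNG decoding and teardown rather than the codecs themselves, so the numbers say little about throughput on realistic image sizes.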

Also, relying on only a single objective metric might be misleading. Jon Sneyers (one of the JPEG XL devs) has a very good plot that illustrates this [2], and he has pointed out that the actual images also have to be manually reviewed to reduce mistakes. Many comparisons are made against images with similar SSIM scores, but it is unclear whether they indeed had similar quality. And even an ideal perceptual metric is not enough; specifics on "taking the AVIF format points as a basis and comparing them with the similarity and similar ratio points" would be necessary to evaluate the test.

[1] For example you can skip `free` on complicated structures because they will be reclaimed by the kernel anyway. Of course this is totally unacceptable for libraries.

[2] https://twitter.com/jonsneyers/status/1560924579719258113 (The ideal metric should strongly correlate with the human score and should not distort the relative score differences. Most metrics---except for SSIMULACRA 2, which was specifically developed with this dataset---are bad at both.)


JXL is the same speed as AVIF? But everyone has been commenting that AVIF is 100x slower.


Encoding AVIF is very slow for large images.

Instead of using a breakpoint-style approach with several predefined sizes, some sites generate images sized to the viewport at very fine granularity, so even changing the viewport by 1px will cause the AVIF to be regenerated.

In these cases you can notice services like Cloudinary taking several seconds to generate the new variant if it's a large image.


If I understood correctly, the AVIF encoder is quite fast at ugly quality levels and very slow at standard and higher quality. https://cloudinary.com/blog/contemplating-codec-comparisons#...
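
If you want to sanity-check that on your own machine (hypothetical file name; --min/--max are quantizer bounds, lower = higher quality), something like

  time avifenc --min 40 --max 50 photo.png low_quality.avif
  time avifenc --min 10 --max 20 photo.png high_quality.avif

will typically show the high-quality encode taking noticeably longer.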


Every time I see JPEG XL I always get my wires crossed with JPEG 2000... as in, "Isn't that that weird format I had to use Irfanview to decode?"


I always confuse it with JBIG2, infamous from NSO's Pegasus zero-click iMessage exploit:

> JBIG2 doesn't have scripting capabilities, but when combined with a vulnerability, it does have the ability to emulate circuits of arbitrary logic gates operating on arbitrary memory. So why not just use that to build your own computer architecture and script that!? That's exactly what this exploit does. Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It's not as fast as Javascript, but it's fundamentally computationally equivalent.

> The bootstrapping operations for the sandbox escape exploit are written to run on this logic circuit and the whole thing runs in this weird, emulated environment created out of a single decompression pass through a JBIG2 stream.

https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...


I imagine writing that exploit was extremely satisfying.


I’m glad most people use their powers for good. This is beyond me.

Incredible programming.


If you are curious, here is what JPEG-XL adoption looks like 42 hours after the iOS 17 launch: https://twitter.com/adityapatadia/status/1704407864326939043


What does that show? Adoption where? By whom?


I guess adoption of browsers capable of displaying JPEG-XL as part of the overall browser landscape


Out of the total requests we served in the last 1 minute, this was the percentage of responses that were JPEG-XL. This is across all of our customer base.


iOS 17 also added HEIF/HEIC support to their browsers, which IMO is HUGE.


We also support HEIF conversion with the `format=heif` parameter. It's not automatic though.


And HEIFs are automatically converted to JXL for iOS 17?


Yes!


I’m not clear on the benefit of doing that. Won’t you end up with worse quality from the transcode and have equivalent file size?


Do give it a try! In our tests we found that the sizes are smaller and encoding HEIF takes a very, very long time. Remember, we provide a bunch of other things like image resize, crop, etc., so we always end up encoding the image again. Encoding in HEIF is very slow.


Anyone know the one-liner to losslessly recompress an existing JPEG as a JPEG-XL? That was one of the most interesting features of the format to me.


Iirc, cjxl is smart enough to do it automatically.

cjxl input.jpg output.jxl

The `cjxl --help` and `cjxl -v -v --help` pages are very well done, and you'll find the flag to turn it on explicitly if you wish to do so.


https://github.com/libjxl/libjxl#usage

> Specifically for JPEG files, the default cjxl behavior is to apply lossless recompression and the default djxl behavior is to reconstruct the original JPEG file (when the extension of the output file is .jpg).
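
So the full round trip (file names are just examples) is:

  cjxl original.jpg recompressed.jxl
  djxl recompressed.jxl restored.jpg

and, per the quoted default behaviour, restored.jpg should be a bit-exact reconstruction of original.jpg.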


JPEG-XL is a fantastic format but it has a big marketing/naming problem. I think the lack of popularity/adoption is in large part due to the confusing name.


I've never been confused about it, not sure what the issue is. There's JPEG, and there's JPEG XL.

Perhaps they should have called it JPEG 2.0.


It sucks that several other two-letter suffixes are also taken:

JPEG XR (Windows Media Photo, HDR), JPEG XS (a "realtime" one) and JPEG XT (a "very backwards compatible" one). So besides XL, I'm never sure what the hell I'm dealing with.


Unfortunately, the authors don't think that is the case. I pointed this out back in 2019/2020.


Can't wait for my cameras to output JPEG XL files natively!

It's probably at least 5 years away though...


For other platforms, it should check whether it's more efficient to serve a JPEG XL WASM polyfill.


Writing a good polyfill for this is harder than you might think, because the browser by default does lazy decoding of JPEG images to avoid holding too many buffers in RAM. Naive attempts (niutech/jxl.js) at JXL polyfills tend to crash browser tabs by using too much memory if they contain a few dozen megapixels of images, while Chrome normally can handle hundreds of megapixels of JPEGs on a page without difficulty.


Relying on a polyfill means the browser's preloader won't fetch the image early.

You can add an explicit preload hint, but it's additional complexity.


Preview.app in Sonoma now also supports JPEG XL. It's also part of the export options.


Does anyone know of a Go library for encoding JPEG-XL files?


Any insight on why Firefox and Chrome dropped it, other than how disappointing it felt? What was their reason?


Unless something changed recently, Firefox never dropped support. It's always been behind a feature flag though. Google gave some reasons on their issue tracker.

https://bugs.chromium.org/p/chromium/issues/detail?id=117805...


For Google, focusing on one next gen image codec is the basic reason.

A perfectly reasonable decision if made by a committee of disinterested parties deciding the future of the web for the benefit of humanity, but a bad look when pulled by a megacorp.

For Firefox, it's about hoping to get to that latter situation. They want the web to move forward in step as much as possible rather than it being some wild west where everyone just does their own thing. They've lived through that already. And it sucked. Hence the various standardisation and community interoperability measures which have radically improved the web platform over the last decade or so.


Might have to do with libjxl being a big C++ pain to build (https://flak.tedunangst.com/post/on-building-jpeg-xl-for-was...) and maybe needing some serious audit/fuzzing. Still, AVIF isn't that much better except libavif being C (libheif is C++ though, and it's probably what's in use).


From what I remember, Firefox was basically neutral on it and didn't want to expend resources on it if it wasn't adopted at wide scale. Chrome basically said that there wasn't much interest, so they weren't going to put resources towards it.


Firefox still supports it - there are even pending patches for animation support. Unless it has recently changed, it’s only available on nightly builds + a preference toggle.

I pulled all the patches in to Waterfox and have had full support for JPEG XL for nearly a year now.


> JPEG-XL is newest image format and Gumlet is first cloud provider to support it.

This is completely untrue. Cloudinary has supported it since 2020, and that's where Jon Sneyers, the chair of the JPEG XL WG and lead developer of libjxl, works.

Shame on you for making false claims - especially considering your "service" is in direct competition to cloudinary.


Also, nearly all Apple devices should now be able to convert JPEG-XL: https://www.theregister.com/2023/06/07/apple_safari_jpeg_xl/

>Google snubbed JPEG XL so of course Apple now supports it in Safari


The iPhone has something like an 87% market-dominating position with teens. Every single one has Safari preinstalled.

The "hedge" angle on doing what google isn't doing is a fascinating possibility.

Could it be that, paradoxically, Google actually inadvertently secured JPEG-XL's future by ripping it out of Chrome?


Founder here. We are the first cloud service to do it "automatically" based on client support. Most services don't have this, and those that do have it don't do it automatically and require a code change.


That is exactly what cloudinary does as part of their optimized delivery.

https://cloudinary.com/documentation/image_optimization#auto...

Also, your claim is that you are "first to support", not "first to serve all our clients' images as jxl without asking if that's ok"...


It still needs the `f_jxl` parameter. Automatic support has still not launched, as per this article's last paragraph: https://cloudinary.com/blog/jpeg-xl-how-it-started-how-its-g...


The "automatic support" you mention there is for the AI optimization, which decides which format to use based on the image content. It is unrelated to the client detection.

You don't need to use the f_jxl parameter, e.g. https://res.cloudinary.com/demo/image/upload/c_scale,w_500/f...

And it can all be handled via srcset.


You've probably never heard of them, but https://www.peakhour.io has had automatic support for it based on the Accept header for about a year.
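
You can check that kind of negotiation yourself with something like (hypothetical URL)

  curl -sI -H "Accept: image/jxl,image/webp,image/*" https://images.example.com/photo.jpg | grep -i content-type

where a client advertising image/jxl in its Accept header gets a JPEG XL response and older clients fall back to WebP/JPEG.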


It seems the poster isn't interested. I just checked in to see if they had "corrected" their article, and it seems not :(


Cloudinary is providing it now but, for the moment, you have to request that it be added when you use the `f_auto` ("auto format") transformation.


Now... Do we need JPEG-XL in the first place? What's the benefit? Hardware is fast enough to decode JPEG, and storage is cheap enough to stick with JPEG. What's the point of using JPEG-XL if you're not running an imageboard or whatnot?!

edit: Happy Birthday, JPEG! We've also had 30 years of integration, optimization, and everything else for that image format.


Don't ignore the fact that unlike WebP (lossless: missing grayscale | lossy: forced 4:2:0 chroma subsampling =x | both: missing metadata and progressive) and AVIF (lossless: worse than PNG in many cases, i.e. useless), it's both a very good PNG and JPEG replacement; the only AVIF wins in my book are very low bpp lossy quality (not very interesting), hardware decoding support, and dav1d being amazing.

I seriously see JXL as a "keep it for 50 years" codec to replace our aging tech.


WebP lossless as a format has two or three efficient ways of encoding grayscale (subtract green mode, palette, perhaps cross color decorrelation would work, too). Perhaps the libwebp APIs don't make it obvious. Density should be quite a bit better than with PNG.
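
A quick way to check on your own grayscale images (hypothetical file name):

  cwebp -lossless gray.png -o gray.webp
  ls -l gray.png gray.webp

Assuming the encoder picks one of those modes, the lossless WebP should come out noticeably smaller than the PNG.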


Indeed, it wasn't obvious that it was using tricks to improve the situation. But I do remember PNG beating WebP in my tests when using gray8 input.

I mean, WebP must store an RGBA tuple for each pixel, even for grayscale without alpha, right?


High-quality progressive decoding at reduced file sizes is a big positive for me. There is no other format that supports that.

https://www.youtube.com/watch?v=UphN1_7nP8U


> High quality progressive decoding at reduced filesizes is a big positive for me.

This is really cool!

Honestly, I want to use regular progressive JPEGs for a current project of mine, but it seems that even that doesn't have support in all the tech stacks yet despite how long it's been around; for example: https://github.com/SixLabors/ImageSharp/issues/449

Here's hoping that in the case of JPEG-XL this will be more commonplace! In combination with loading="lazy", it would surely make the experience of scrolling through modern sites with lots of content a bit less data intensive.


yeah, you are very much at the mercy of the libs you use if you want full feature parity with, for example, libjpeg-turbo in the case of JPEG.

In the browser, where supported, I guess loading="lazy" probably already works (I haven't tried). I think a more advanced version would be nice, where you can choose a "staged" loading type, or some mechanism to choose the pass/frame at which to pause the network activity so that you can control it further via JS. At a minimum it would enable a preview plus a reduced download for the full version. I can see many use cases where that could be useful.


Ok, that looks better than Progressive JPEG, I'll give you that. ;-)

On the other hand: When your connection speed is 30 kb/s, images are probably the last problem with "modern" websites... ;-)


Oh yeah, definitely. For me the use case is infinite scrolling, where you buffer offscreen images to let you continually flick through an image feed without having to "wait" for a preview to appear. Although you can do this without JXL, it's a much more fluid experience with the progressive download - especially if you micromanage the behaviour of the network requests (i.e. progressive throttling of offscreen images past the first frame). The lower file size compared to JPEG is also a benefit there, as you can more quickly resume the load of the higher-fidelity frames when in view.

Current browser support, where available, is a lot more restrictive TBH, as you don't have any control over that behaviour. But outside the browser it's solid gold.


The biggest advantage I can think about is transparency support.


Ok, but we've got PNG for that. Photos usually don't contain transparency, so why have PNGs that are several megapixels/megabytes large?


Because JPEG XL has smaller file sizes, which saves you money in the era of Managed NAT Gateway's 7 cents per gigabyte egressed.
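
Back-of-the-envelope with made-up numbers: at 10 TB of image egress per month, a 20% size reduction is about 2,000 GB saved, i.e. roughly $140/month at $0.07/GB.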


Why not have one format that's good for everything image-related? There's zero reason not to have it now. Would you say 'we've got GIF for that' if asked about animation?


I think animation is also part of the JXL spec, although it may not be part of the reference implementation yet =)



