Hacker News

JPEG XL's XYB color space is perceptual and based on LMS, but you don't have to use it, and you can store 16-bit floats directly. The paper notes that the libjxl library interface lacks some necessary features:

"In principle, JPEG XL supports having one main image and up to 255 sub-images, which sounds like a good match for c0 and f1, ..., fn−1. Unfortunately, the current implementation in libjxl does not allow us to tweak the compression ratio and subsampling on a per-sub-image basis. Due to these limitations, we currently use one JPEG XL file per channel so that we have full control over the compression parameters."
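The workaround the paper describes boils down to deinterleaving the image into planes and encoding each plane as its own JPEG XL file. A minimal Python sketch of the splitting step, with illustrative file names; in practice each plane would then be fed to the encoder (e.g. cjxl/libjxl) with its own quality and subsampling settings:

```python
import struct

def split_channels(pixels, num_channels):
    """Deinterleave a flat sample list [r, g, b, r, g, b, ...]
    into one planar list per channel."""
    return [pixels[c::num_channels] for c in range(num_channels)]

# Toy 2x2 RGB image; samples will be stored as 16-bit floats,
# which JPEG XL can hold directly.
interleaved = [0.1, 0.5, 0.9,  0.2, 0.6, 1.0,
               0.3, 0.7, 0.0,  0.4, 0.8, 0.5]

planes = split_channels(interleaved, 3)

# One file per channel; each raw plane would then be compressed
# separately so the parameters can differ per channel.
for c, plane in enumerate(planes):
    data = struct.pack(f'<{len(plane)}e', *plane)  # 'e' = IEEE half float
    with open(f'channel_{c}.raw', 'wb') as f:
        f.write(data)
```

The raw dump here stands in for the actual JXL encode; the point is only that once the planes are separate files, nothing forces them to share compression settings.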

This follows a general trend in modern codecs where the format itself allows for many different tools, and the job of the encoder is to make good use of them. See "Encoder Coding Tool Selection Guideline" for a nice chart of the possibilities: https://ds.jpeg.org/whitepapers/jpeg-xl-whitepaper.pdf



I see. It turns out I misunderstood some of the details, including that libjxl is responsible for a lot of what I thought was inherent to the format.

It does seem a bit weird to me that image files are going to end up more like video container formats, where you need the appropriate codec available in order to decode them. But I suppose when the use cases are this widely varied it was probably inevitable.

Maybe we should just cut to the chase and standardize codecs that fit inside of mp4 or mkv for all media, including still images, audio, everything. I'm only half joking - it feels like where this is headed.


I believe JPEG XL allows different scaling per layer, including decent default interpolation.

But exotic things are usually easier to engineer outside of the container; then you don't need to figure out how the standard works.



