
You already have to install software that inserts itself into the audio stack and introduces extra latency to be able to use your wireless headphones. Those are called "Bluetooth drivers".

In any case, I run processing on my computer to apply a convolution filter on my audio stack, which adds around 2.3ms of latency and uses 0.15% of my CPU. Compared to the ~200ms of Bluetooth latency it's completely unnoticeable, and I'm sure Apple can figure this out better than I can.
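For a sense of scale, here's a rough sketch of where that kind of latency comes from: block-based FIR convolution, assuming a 48 kHz stream and a 128-sample buffer (illustrative numbers, not my exact setup).

```python
# Sketch: overlap-add FIR convolution per audio block.
# The added latency is dominated by buffering one block.
import numpy as np
from scipy.signal import fftconvolve

SAMPLE_RATE = 48_000          # Hz (assumed)
BLOCK_SIZE = 128              # samples per audio callback (assumed)
ir = np.random.randn(4096)    # stand-in for a measured impulse response

print(f"buffering latency: {BLOCK_SIZE / SAMPLE_RATE * 1e3:.2f} ms")  # ~2.67 ms

def process_block(block, tail):
    """Convolve one block with the IR, carrying the tail over (overlap-add)."""
    out = fftconvolve(block, ir)
    out[: len(tail)] += tail        # add the tail left over from the last block
    return out[:BLOCK_SIZE], out[BLOCK_SIZE:]

tail = np.zeros(0)
# for block in audio_blocks: out, tail = process_block(block, tail)
```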



Hardly the same - they are generic, provided by the OS, follow a standard, and don't require device-specific support.


They do need some pretty wide support per individual device. A proper stack requires something like 6 different codecs, all of which are quite heavy and very different.

In any case, the transformations are not generic at all. You basically need to do one convolution, that's it. The headphones can provide the impulse response with which to convolve at pairing.
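The pairing part could be as simple as something like this (a purely hypothetical sketch of the idea, not any real API):

```python
# Hypothetical: the OS stores one impulse response per paired device and
# applies a single overlap-add convolution to the outgoing stream.
import numpy as np
from scipy.signal import oaconvolve

ir_by_device: dict[str, np.ndarray] = {}    # filled at pairing time

def on_pair(device_id: str, impulse_response: np.ndarray) -> None:
    ir_by_device[device_id] = impulse_response

def render(device_id: str, audio: np.ndarray) -> np.ndarray:
    ir = ir_by_device.get(device_id)
    if ir is None:
        return audio                         # no IR known: pass through
    return oaconvolve(audio, ir)[: len(audio)]
```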

For things from Apple, of course, this is trivial.


Yes, basically Bluetooth is already almost too complicated to do well, and you would rather it were more complicated still? And, like you say, applying a convolution is computationally trivial, especially compared to decoding a lossy audio codec and running a Bluetooth stack and antenna, which the headphones are already doing. Offloading this to the device would make absolutely no difference to battery life while increasing complexity and unreliability, and it would restrict what processing can be done to a static convolution with an impulse response. There is no reason to do this on the device.


It certainly would make a difference to battery life. Bluetooth connections as well as decoding are done in hardware, using commodity chips that can't do much else. Adding audio processing hardware will increase the complexity of these chips, which translates to higher prices and lower battery efficiency.

Remember, some of those devices have 20mAh of battery. The codecs already have to be made easy to decode.

There is also absolutely no need to limit processing to a static convolution with an impulse response. That's just the only device-specific processing you have to do. Headphones are minimum-phase devices, so except for things like distortion, they can basically be described by a single impulse response.
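To make the minimum-phase point concrete: for a minimum-phase system, the impulse response is fully determined by its magnitude response alone, via the standard homomorphic (real-cepstrum) construction. A minimal sketch:

```python
# Recover a minimum-phase impulse response from a magnitude response.
import numpy as np

def minimum_phase_ir(mag: np.ndarray) -> np.ndarray:
    """mag: full (two-sided) N-point FFT magnitude of the system."""
    n = len(mag)
    cepstrum = np.fft.ifft(np.log(np.maximum(mag, 1e-12))).real
    # Fold the anti-causal quefrencies onto the causal ones.
    window = np.zeros(n)
    window[0] = 1.0
    window[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        window[n // 2] = 1.0
    return np.fft.ifft(np.exp(np.fft.fft(cepstrum * window))).real
```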

For the rest, like spatial audio or EQ or anything of the sort, there is no need to do it per headphone; it's the same for all of them.
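E.g. generic EQ is the same handful of biquads no matter what is on the other end of the link. A sketch using the standard RBJ "cookbook" peaking filter (example parameter values, nothing device-specific):

```python
# Generic peaking-EQ biquad (RBJ audio EQ cookbook formulas).
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs: float, f0: float, gain_db: float, q: float):
    """Return (b, a) coefficients for one peaking-EQ biquad."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

b, a = peaking_eq(48_000, f0=3_000, gain_db=-4.0, q=1.4)
# filtered = lfilter(b, a, audio)
```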



