I recently changed my mind on this: being able to use phone numbers as identifiers also has some benefits. Last time I was on vacation, it was easy to get in touch with service providers (e.g. surf school, massage place) via WhatsApp, simply because I had their number and everyone used WhatsApp anyway. If Signal were more popular, I could instead contact them directly via Signal, since I only need their phone number. It allows for a more seamless transition.
Matrix OTOH would have a harder time getting adopted: businesses would need to update their websites, Facebook pages, and flyers to add the Matrix contact info, which they wouldn't do until Matrix is more popular. Which is a chicken-and-egg problem.
Matrix still has 3PIDs, including phone number and email.
The most common client, Element, will by default ask for your phone number when registering on the default matrix.org homeserver, and the default vector.im identity server will associate your phone number with your Matrix user.
So by default (changing servers and/or opting out is easy and encouraged BTW), Matrix already works like you would prefer.
This is, BTW, a main critique in the OP, since it makes vector.im a PII and metadata aggregator.
> > The first part is true of all statistical models: To improve performance by a factor of k, at least k^2 more data points must be used to train the model. The second part of the computational cost comes explicitly from overparameterization. Once accounted for, this yields a total computational cost for improvement of at least k^4.
Those claims are entirely new to me, and I've been a researcher in the field for almost 10 years. Where do they come from/what theorems are they based on? It's unfortunate this article doesn't have any citations.
But that k^2 rate only applies to estimation (i.e. how well a population parameter can be estimated) in certain regimes, and not e.g. to expected risk (i.e. how well one can do at prediction), so I'm not sure how it would apply here.
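In case it helps, here is my guess at the back-of-the-envelope arithmetic behind the quoted claim -- this is my own reconstruction, not something the article spells out:

    % statistical error of an estimate typically shrinks like 1/sqrt(n)
    \mathrm{error}(n) \propto n^{-1/2}
    % so improving performance by a factor of k needs k^2 times the data
    n \to k^2 n
    % if training compute scales like (data points) x (parameters), and
    % overparameterization ties parameter count to n, then compute ~ n^2
    \mathrm{compute} \propto n^2 \;\Rightarrow\; \mathrm{compute} \to (k^2)^2 \cdot \mathrm{compute} = k^4 \cdot \mathrm{compute}

Whether the n^{-1/2} rate is the right yardstick for prediction error is, as said above, exactly the open question.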
This is exciting news, both for Rust and for the project -- I think that's a really cool direction. The web is becoming more and more a commodity for executing code remotely, and I think this step will help leverage website-as-interface as the default GUI for our programs, and empower us to commoditize it even more. And the Linux Foundation is a very lovely home -- they're dedicated to open standards and are great stewards. However, the complete lack of talk about funding, and the "every bit helps", makes it sound like this is actually the death of commercial support for this project. And I think it's super sad that Mozilla can't fund it anymore; I would've hoped someone else would have picked up the banner monetarily. Does anyone know if the project has a good chance of survival (Rust is a great dev community, but might alienate broader adoption) -- can this survive purely within the open source community?
I thought that's exactly the point of Scientific Reports: to publish "reports" (i.e., not proper "papers") without caring about their impact or importance, as long as the science (however little of it there is) is done properly. I.e., you literally send them stuff that isn't good enough to be published in a proper journal. It's meant to be a dreg journal with low reputation; that's a feature, not a bug. The idea is to allow people to publish their work (e.g. to finish their PhD), but everyone knows that further investigation is needed before the results should be called a proper finding. I think of it as the "workshop" version of a journal.
No, it definitely isn't. Why is audio quality so much worse when I connect my headphones as a headset instead of as an audio sink (even when I'm not speaking)? Why is it so hard to connect two audio sinks to one source? Why is the experience of using the same sink on multiple devices still not smooth?
I also don't get why this hasn't been fixed yet. There is clearly a need for high-quality Bluetooth headsets.
Two sinks to one source works well for some devices, but this is more of an edge case for most people.
I also have the feeling that Apple AirPods have better quality when connected to an iPhone than any other Bluetooth headset. Do they use a proprietary hands-free profile?
My understanding is that music gets decompressed on the Apple headphones: the original AAC stream is sent over the air. Most Bluetooth headphones will just effectively transcode the audio, such that for example AAC gets decompressed and then recompressed into AptX/LDAC. For music listening, that's completely suboptimal. I use FLAC sources for my Sony headphones, which makes them very listenable (along with copious EQ to tame the +5 dB excess bass). Bluetooth headphones have a lot of room for improvement.
> My understanding is that music gets decompressed on the Apple headphones: the original AAC stream is sent over the air.
That's claimed in many places, but this article claims the opposite: https://habr.com/en/post/456182/. It says everything gets (re-)compressed into AAC on the sending device before being sent to the AirPods, in order to mux in other audio events. In principle, with end-to-end Apple hardware, they could send multiple streams and leave the muxing to the AirPods, but I don't think anyone has conclusively shown whether this is actually done.
If there's one thing I've learned about technical claims about Apple hardware, it's that they are not to be believed unless they come straight from Apple in a precise and unambiguous statement, or from a reverse engineer/hacker who has looked at the code/protocols.
The reality distortion field is just as strong as always when it comes to technical details too. People will make random things up to prove that Apple is different or (more) special.
Claims of poor Bluetooth audio quality generally fall into the following categories:
* Bad source settings/implementation (i.e. bitrate too low)
* Bad sink implementation/EQ/DAC
* The HSP issue referenced by OP (where you can't have both HQ audio and use the mic)
* FUD by patented codec authors
The reality is that even the basic royalty free Bluetooth SBC codec is perfectly fine, and sounds transparent at high bitrate settings, which decent sources should be using and all sinks are required to support. Transcoding doesn't make much of a difference either. It's a poor codec, but the bitrate is high enough that it doesn't matter. You can ABX test it yourself if you're so inclined, purely in software, with high quality wired audio hardware. I have. You'll see the codec isn't the problem.
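In case anyone wants to reproduce that test, here's roughly how I'd set it up purely in software -- a sketch, assuming an ffmpeg build with the native SBC encoder on PATH; the file names and the 328 kbps target (about the A2DP joint-stereo maximum) are just my choices:

    # Round-trip a lossless source through SBC, then ABX against the original.
    import subprocess

    src = "original.flac"

    # 1. Encode the lossless source to SBC at roughly the maximum A2DP bitrate.
    subprocess.run(["ffmpeg", "-y", "-i", src,
                    "-c:a", "sbc", "-b:a", "328k", "roundtrip.sbc"], check=True)

    # 2. Decode back to lossless so both files play through the exact same
    #    wired chain (same DAC, same volume).
    subprocess.run(["ffmpeg", "-y", "-i", "roundtrip.sbc",
                    "-c:a", "flac", "roundtrip.flac"], check=True)

    # 3. Load original.flac and roundtrip.flac into any ABX tool
    #    (e.g. foobar2000's ABX comparator) and see if you can reliably tell them apart.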
So when your Bluetooth device sounds better wired than wireless, or works better with AptX or some other patented nonsense codec - most likely the problem is careless software/firmware (or outright crippling to push patent licenses), not the spec being bad.
(I say this as someone who has gotten into flamewars over the quality of ffmpeg's AAC implementation and found bugs in the Opus reference encoder; I can tell when audio quality drops.)
My personal lived, yet anecdotal, experience is with a fairly decent Kenwood head unit in my car. I'll often use Bluetooth for convenience, but from time to time I'll connect over USB. Whenever I make the direct USB connection, the sound quality is always noticeably better: rounder, deeper sounds, much better quiet and loud passages ...
This leads me to conclude that one or other of the following must be the case:
- The iPhone DAC is about 10^9 times better than the Kenwood DAC
- Kenwood and iPhone have negotiated a poor codec
> most likely the problem is careless software/firmware
So, don't take this wrong, but I'm going to wince and say this has a shade of a no-true-Scotsman argument to it ... and I say this because I think this is the crux of what's wrong with Bluetooth. It is a very closed technology, and it's very hard to get a leg up on the standardisation, or on how to use it for your own devices, if you want to do anything in any way commercial.
I'm saying this as a telecomms guy who is used to having high-quality standards documentation and reference implementations for just about everything I do. Interoperability is key. Though Bluetooth is a communications standard, it seems to have been influenced more by consumer electronics than by comms.
After 10-15 years or so of using Bluetooth, it increasingly feels like a technology not really developed in good faith. You get these clunky, imprecise results that I and other people report all the time. Working with a Bluetooth device, you're always going to be throwing the dice versus the certainty of plain old wired connections. It's plain to see that right across the industry people are hedging against it, and anybody that does require reliable M2M short-range radio is using a proprietary protocol.
Bluetooth is a millstone around our necks. Yes, it gives you freedom from wires and a certain limited amount of interoperability, but you don't get much that you wouldn't have got from proprietary radio technologies.
I'm not saying Bluetooth audio doesn't suck in practice. I'm saying it's not the codec/technology's fault, but everyone assumes it is, and that is exactly what all the companies peddling patented alternate codecs want you to think.
I have a set of Bluetooth speakers, and they sound better over the wired aux in than wireless. I know what this experience is like, and this is why I have tested the codec myself and concluded that it wasn't the problem. And since Bluetooth is a digital audio standard, if the codec isn't the problem and the data is arriving at the destination, then clearly any quality problem is the destination's fault.
Don't get me wrong, Bluetooth is a horrible standard for many other reasons (it's worse than USB, and that's saying something); I've implemented Bluetooth-related protocols. But "screws up audio quality" isn't one of them.
I'm saying that being so vulnerable to codecs *is* the flaw with Bluetooth. I'm happy you found a combination that works for you ... but what is the point of Bluetooth exactly if you can't have good interoperability?
In fact, I wouldn’t be surprised if this issue was created just so there could be a “marketplace” for codec pedlars ...
> Transcoding doesn't make much of a difference either.
I respectfully disagree. Transcoding does make an audible difference at the bitrates typically used, and it should have no place in "high fidelity" audio. My issue with these Bluetooth codecs is that they are not used at the source, so in practice they will always be used for transcoding.
Have you ABX tested an SBC transcode at the maximum bitpool settings vs the original? Because I have, and I couldn't tell the difference.
The "transcoding is bad" story is about low quality settings, archival, and/or bad encoders. E.g. don't do repeated transcodes with the ffmpeg AAC encoder, not even at 320kbps. But one final transcode with SBC for over-the-air delivery at the max allowed bitrate? That's totally fine, especially for typical Bluetooth use cases (listening on the go, casual headphones, etc).
Well, I have noticed significant degradation of 128 kbps MP3 sources when sent over Bluetooth, whereas my FLAC CD rips sound fine. I don't think the effects of transcoding are well understood or well studied, especially the interactions between different codecs.
And why does latency compensation still not work properly? For real-time stuff like games or video chat it's tough to compensate for latency, but when watching a prerecorded video there's no excuse. Video and audio should never be out of sync, and yet I find they usually still are.
I have zero interest in using my wireless headphones as a headset, ever. And yet, software will randomly manage to trigger it and kick me off of the A2DP profile.
Microsoft Teams is the worst at this. It completely ignores my system audio preferences (which were painstakingly configured to use the expensive microphone as an input, not the wireless headphones) and tries to use the headphones, which switches the profile to headset and ruins the audio quality for the remainder of the call.
I guess because it switches to a mode that favors latency over sound quality?
I tried using Bluetooth headphones with a digital drum kit once, which went about as well as you'd expect. (Mind you, there was a laptop in between as well.)
It somehow works with dedicated USB dongles that are the size of a penny. Look at the Jabra Evolve 65T. From my perspective as a customer, it's an issue with the Bluetooth standard, not the hardware.
The latency problems are somehow solved when using USB adapters, so why can't we embed those adapters in our devices?
The fact that high quality simultaneous audio input/output isn't possible is a big negative for BT that I just can't believe hasn't been addressed yet.
Headset voice has to squeeze over a telephone-grade SCO link (classically 64 kbit/s CVSD, roughly landline quality), which is why it sounds like a phone call.
I had a pretty smooth experience using Pulse with multiple sinks/sources; maybe look into the docs. The only problem I had was latency sync while multiple sinks play together.
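For the "two sinks, one source" question above, the piece of PulseAudio to look at is module-combine-sink. A minimal sketch of driving it from Python -- the sink names below are placeholders, list yours with "pactl list short sinks":

    # Combine two outputs into one virtual sink so a single stream plays on both.
    import subprocess

    slaves = ",".join([
        "alsa_output.pci-0000_00_1f.3.analog-stereo",  # placeholder: built-in output
        "bluez_sink.AA_BB_CC_DD_EE_FF.a2dp_sink",      # placeholder: BT headphones
    ])

    subprocess.run(["pactl", "load-module", "module-combine-sink",
                    "sink_name=combined", f"slaves={slaves}"], check=True)

    # Anything routed to the new "combined" sink now plays on both devices;
    # the drift between them is exactly the latency-sync issue mentioned above.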
A lot, I guess! I discover new things every day in Ruby. And with Ruby 3 being released at the end of the year... But I agree that I won't go up to volume 20, hehe.
So that's an order of magnitude more than what OP suggested and one less than what I suggested. But I still think that 100 is a more realistic estimate to get stellar results.
> The efficiency of an estimator is different from its bias.
I think the comment is drawing a parallel to variance (better efficiency = lower variance). Still not exactly the same, I think, but pretty damn similar.
They're related in that less complex models will degrade more gracefully when making predictions on novel anomalies, and that in general model complexity drives the bias-variance trade-off.
But, erring on the side of efficiency in this discussion is more like over-fitting, which implies an overly complex model. It's making your model too good for one situation, such that it fails to generalize. You'd rather pull back on accuracy and choose a simpler model, in the hopes that it's more resilient to novel observations.
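For concreteness, the textbook decomposition this thread is circling around (standard stats, nothing specific to the article):

    % mean squared error of an estimator \hat\theta of a parameter \theta
    \mathbb{E}\big[(\hat\theta - \theta)^2\big] = \big(\mathbb{E}[\hat\theta] - \theta\big)^2 + \mathrm{Var}(\hat\theta)
    % i.e. MSE = bias^2 + variance; "efficiency" ranks estimators by variance
    % (an efficient unbiased estimator attains the Cramer-Rao lower bound)

Over-fitting is then the low-bias / high-variance corner of that trade-off, which I think is the situation the parent is describing.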
Did I despise the MS of the 90s and their business tactics? Sure. But whether he pays taxes on his philanthropic endeavors doesn't really matter; what matters is that they exist and do some good in the world -- and certainly, spending money on health care is good in my book. So what makes him a "dark character" in your eyes?
Instead of donating to existing public health (and education) organizations, OS programmer Gates created his own so that his personal biases wouldn't be challenged.
@BeatLeJuce You don't have to go very far online to find the answers to your question. James Corbett has amassed a formidable array of facts and history, for example.