I still don't understand how Microsoft lets standby remain broken. I can never leave the PC in my bedroom in standby because it will randomly wake up and blast the coolers.
Consumers _do not care_ if it is the firmware or Windows.
Dell was one of the earliest, and biggest, brands to suffer these standby problems. Dell has blamed MS and MS has blamed Dell, and neither has been in any hurry to resolve the issues.
I still can't put my laptop in my backpack without shutting it down, and as a hybrid worker, having to tear down and spin up my application context every other day is not productive.
Yeah I hear you. One of the reasons I’m still inclined towards Mac laptops for “daily drivers” is precisely because it’s disruptive to have to do a full shutdown that obliterates my whole workspace. Other manufacturers can be fine for single-use machines (e.g. a study laptop that only ever has Anki and maybe a browser and music app open), but every step beyond that brings increased friction.
Maybe the most tragic part is that this drags Linux down too, plaguing it with the same hardware-rooted sleep issues.
S3 sleep was a solved problem until Microsoft decided that your laptop must download ads^Wsuggestions in the background and deprecated it. On firmwares still supporting S3, it works perfectly.
Sleep used to work perfectly fine up until, I don't know, 10 years ago. I doubt hardware/firmware/BIOS got worse since then, this is 100% a Microsoft problem.
Sadly even if Microsoft had a few lineups of laptops that they'd use internally and recommend, companies would still get the shitty ones, if it saves them $10 per device.
Binaries or source, it's pretty much the same unless you thoroughly vet the entire source code. Malicious code isn't advertised and commented and found by looking at a couple of functions. It's carefully hidden and obfuscated.
TIL: AVIF can be nicely embedded and looped like a gif. That's awesome. With most of the video embedding I've seen so far, the issue has always been that it was embedded as a video player.
I recently implemented a fixed-size memory pool with spinlocks and now I'm wondering: how would one implement it without a spinlock?
Edit: Maybe I'm confusing terminology. What I'm doing is looping until other threads have returned memory, but I'm also doing a short sleep during each loop iteration.
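For what it's worth, one common alternative to a spin+sleep loop is to block on a condition variable until another thread actually returns a block, so waiters sleep in the kernel instead of polling. A minimal sketch of that idea, assuming a simple vector-backed free list (names and layout are illustrative, not anyone's actual implementation):

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <vector>

// Hypothetical fixed-size pool: acquire() blocks on a condition variable
// instead of spinning/sleeping until a block is released.
class FixedPool {
public:
    FixedPool(std::size_t block_size, std::size_t block_count)
        : storage_(block_size * block_count) {
        free_.reserve(block_count);
        for (std::size_t i = 0; i < block_count; ++i)
            free_.push_back(storage_.data() + i * block_size);
    }

    // Waits (without burning CPU) until a block is available.
    void* acquire() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !free_.empty(); });
        void* p = free_.back();
        free_.pop_back();
        return p;
    }

    // Returns a block to the pool and wakes one waiting thread.
    void release(void* p) {
        {
            std::lock_guard<std::mutex> lock(m_);
            free_.push_back(static_cast<std::byte*>(p));
        }
        cv_.notify_one();
    }

private:
    std::vector<std::byte>  storage_;  // backing memory
    std::vector<std::byte*> free_;     // free list of block pointers
    std::mutex              m_;
    std::condition_variable cv_;
};
```

A fully lock-free free list built on compare-and-swap is also possible, but then you have to deal with the ABA problem, so the blocking version above is usually the simpler place to start.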
Bibtex entries are often also incorrectly generated. E.g., Google Scholar sometimes puts the names of the editors instead of the authors into the bibtex entry.
That's not happening for the same reason people don't bug-check every single line of every third-party library in their code: it's a chore that costs valuable time you could instead spend getting the actual work done. What's really important is that the scientific contribution is 100% correct and solid. For the references, the "good enough" paradigm applies. They mustn't be complete bogus, like a referenced work not existing at all, which would indicate that the authors didn't even look at the reference. But minor issues like typos, or the rare entry with the wrong authors, can happen.
To be honest, validating bibliographies does not cost valuable time. Every research group will have their own bibtex file to which every paper the group ever cited is added.
Typically when you add it you get the info from another paper or copy the bibtex entry from Google Scholar, but it's really at most 10 minutes of work, more likely 2-5. Every paper might add 5-10 new entries to the bibliography, so that's an hour or less of work?
I don't think the original comment was saying this isn't a problem but that flagging it as a hallucination from an LLM is a much more serious allegation. In this case, it also seems like it was done to market a paid product which makes the collateral damage less tolerable in my opinion.
> Papers should be carefully crafted, not churned out.
I think you can say the same thing for code and yet, even with code review, bugs slip by. People aren't perfect and problems happen. Trying to prevent 100% of problems is usually a bad cost/benefit trade-off.
What's the benefit to society of making sure that academics waste even more of their valuable hours verifying that Google Scholar did not include extraneous authors in some citation which is barely even relevant to their work? With search engines being as good as they are, it's not like we can't easily find that paper anyway.
The entire idea of super-detailed citations is itself quite outdated in my view. Sure, citing the work you rely on is important, but that could be done just as well via hyperlinks. It's not like anybody (exclusively) relies on printed versions any more.
You want the content of the paper to be carefully crafted. Bibtex entries are the sort of thing you want people to copy and paste from a trusted source, as they can be difficult to do consistently correctly.
Oh nice, that's good to know. Yes, those clone sites go straight onto my block list too, along with Userbenchmark, sites with AI-generated "info" pages (if I want AI answers, I'll just ask ChatGPT), sites that won't work without third-party cookies, low-quality game guide sites that were evidently made for users to visit but not actually to help them, etc.
Same here! Easily jumping between files is one of the best features. I always have VS and vscode open simultaneously, doing about 99% of the work in vscode and only using VS to compile and to debug.
I've also used Eclipse in the past but have used vscode almost exclusively in recent years. It's just a phenomenal text editor. It has fantastic multi-line selection and editing tools, and searching for files is instant; you don't even need to be fully accurate with the filename. Nowadays I hardly ever use the sidebar to look for a file: I just hit the ctrl+e shortcut, type a few letters of the filename, and get the result instantly. It's a small thing with a huge impact. VS, for comparison, lags a few seconds when searching files, and it misses files that are not imported into the workspace. That difference makes VS useless to me.