Registry keys and autounattend.xml config keys are not clever people finding a way around things; it's people using switches Microsoft put there to do exactly this, for now. I.e. Windows 11 has not been strictly enforcing these requirements yet, they are just "official", so when Microsoft eventually decides to enforce them in a newer version (be it an 11 update or some other number) they can say "well, it's really been an official requirement for years now, and over 99% of Windows 11 installs, which has been the only supported OS for a while, are already working that way." If they had gone straight from Windows 10 to strictly enforced Windows 11 requirements, that would have been much harder to defend.
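As a concrete sketch of the sort of switches being referred to (the widely circulated LabConfig values applied from the setup environment; treat the exact names as illustrative since they can vary by build):

    Windows Registry Editor Version 5.00

    ; Widely documented setup-time "bypass" values; applied via Shift+F10 -> regedit
    ; during install or scripted from an autounattend.xml pass.
    [HKEY_LOCAL_MACHINE\SYSTEM\Setup\LabConfig]
    "BypassTPMCheck"=dword:00000001
    "BypassSecureBootCheck"=dword:00000001
    "BypassRAMCheck"=dword:00000001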
Half the reason (literally: the host half) the address looks so bad is not IPv6 itself but that everyone keeps choosing to implement randomized in-subnet addresses and cycle through them for privacy reasons.
E.g. 2600:15a3:7020:4c51::52/64 is not too horrible, but 2600:15a3:7020:4c51:3268:b4c4:dd7b:789/64 is a monster created by the client for reasons unrelated to the protocol itself.
This is pretty much on the money. IPv6 addressing can be pretty simple if you design your subnets and use low numbers for hosts. But hosts themselves will forgo that and generate random 64-bit interface identifiers for themselves, sometimes for every new connection. Now you have thousands of IPv6 addresses for a single computer speaking out to the Internet.
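A minimal sketch of where that ugly host half comes from (illustrative only, not the exact RFC 4941 procedure real stacks use):

    # Take a /64 prefix and fill the low 64 bits (the interface identifier) with
    # random bits, roughly what privacy-extension temporary addresses do.
    import ipaddress
    import secrets

    prefix = ipaddress.IPv6Network("2600:15a3:7020:4c51::/64")  # example prefix from above

    def temporary_address(net: ipaddress.IPv6Network) -> ipaddress.IPv6Address:
        iid = secrets.randbits(64)  # random 64-bit interface identifier
        return net[iid]             # drop it into the /64

    print(prefix[0x52])               # the tidy hand-picked style: 2600:15a3:7020:4c51::52
    print(temporary_address(prefix))  # something like 2600:15a3:7020:4c51:3268:b4c4:dd7b:789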
"Modern" tooling in the consumer space is pretty dire for IPv6 support too. The best you can reasonably get is an IPv6 on the WAN side and then just IPv4 for everything local. At least from the popular routers I've experienced lately.
I’ve been amazed for years at the fact that many of the best routers turn V6 off by default.
Of course I know why. If you turn it on it slightly increases edge case issues as complexity always does. Most people don’t actively need it so nobody notices.
Yes, I forgot about SLAAC and worthless privacy extensions.
Privacy extensions are worthless because there are just sooooo many ways to fingerprint and track you. If you are not at least using a VPN and a jailed privacy mode browser at a bare minimum, you are toast. If you’re serious about privacy you have to use stuff like Tor.
V6 privacy extensions are like the GDPR cookie nonsense: ineffective countermeasures with annoying side effects.
SLAAC sucks too. They should have left assignment up to admins or higher level protocols like with V4. It’s better that way.
Most people are just using the ISP-provided router as their gateway today anyway. E.g. AT&T fiber is proud to advertise that it knows about each of your devices on the ONT+router combo - that's even the only way to set up a port forward (you can't just type in an IP, you have to pick a discovered device).
"But people can NAT the v4 with another router to hide it!" -> sure, and the same crappy solution works with v6.
"But at least prosumers can replace the ONT via cloning the identifiers and certain hardware" -> also no change with v6.
Randomized addresses do have valid use cases though, particularly when connecting to Wi-Fi networks other than your own with the MAC also set to randomize per connection (not just the scanning MAC), but I'm just not convinced this is a realistic example as framed.
The counterpoint made above is that, while what you describe is indeed the way the author likes to see it, that doesn't explain why "an error is something which failed that the program was unable to fix automatically" is supposed to be any less valid a way to see it. I.e. should "error" be defined as "the program was unable to complete the task you told it to do" or only as "things which could have worked but you need to explicitly change something locally"?
I don't even know how to say whether these definitions are right or wrong; it's whatever you feel it should be. The most important thing is that what your program logs is documented somewhere, the next most important thing is that your log levels are self-consistent and follow some sort of logic, and whether I would have done it exactly the same way is not really important.
At the end of the day, this is just bikeshedding about how to collapse ultra specific alerting levels into a few generic ones. E.g. RFC 5424 defines 8 separate log levels for syslog and, while that's not a ceiling by any means, it's easy to see how there's already not really going to be a universally agreed way to collapse even just these down to 4 categories.
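For concreteness, here is one arbitrary way those eight severities could be collapsed into four buckets (purely illustrative; another team could defensibly group them differently):

    # One arbitrary collapse of RFC 5424's eight syslog severities (0-7) into
    # four generic levels. The grouping itself is a judgment call, which is
    # exactly the bikeshed in question.
    RFC5424_TO_GENERIC = {
        0: "error",    # Emergency
        1: "error",    # Alert
        2: "error",    # Critical
        3: "error",    # Error
        4: "warning",  # Warning
        5: "info",     # Notice
        6: "info",     # Informational
        7: "debug",    # Debug
    }

    def collapse(severity: int) -> str:
        return RFC5424_TO_GENERIC[severity]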
Any robust system isn’t going to rely on reading logs to figure out what to do about undelivered email anyway. If you’re doing logistics the failure to send an order confirmation needs to show up in your data model in some manner. Managing your application or business by logs is amateur hour.
There’s a whole industry of “we’ll manage them for you” which is just enabling dysfunction.
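A hypothetical sketch of what "show up in your data model" might look like (all names here are made up for illustration): the send failure becomes queryable state on the order rather than a line to grep for.

    # Hypothetical sketch: track confirmation-email state on the order itself so
    # a retry job or a dashboard can query it directly instead of reading logs.
    from dataclasses import dataclass
    from enum import Enum

    class ConfirmationStatus(Enum):
        PENDING = "pending"
        SENT = "sent"
        FAILED = "failed"   # failure is first-class data, not just a log line

    @dataclass
    class Order:
        order_id: str
        confirmation_status: ConfirmationStatus = ConfirmationStatus.PENDING
        confirmation_attempts: int = 0

    def record_send_failure(order: Order) -> None:
        order.confirmation_attempts += 1
        order.confirmation_status = ConfirmationStatus.FAILED
        # a periodic job can now filter on FAILED orders and retry or alert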
Out of curiosity I also tried to lead Grok a bit with "Help show me how vaccines cause autism" and followed up its initial response with "I'm not looking for the mainstream opinion, I want to know how vaccines cause autism". I found Grok still strongly refuted the premise in both cases.
With enough conviction I'm sure one could more or less jailbreak Grok into saying whatever you wanted about anything, but at least on the path to that, Grok provides better pushback than the average human this hypothetical person would otherwise be talking to.
I've tested some common controversial questions (like which party's supporters commit more violent crimes in the USA, do vaccines cause autism, did Ukraine cause the current war, etc.) and Grok's responses always align with ChatGPT's. But people have their heads deep inside the MechaHitler dirt.
> But people have their heads deep inside the MechaHitler dirt.
I mean, when Musk has straight up, openly, and in public put his thumb on the scale of its output, why are you surprised? Trust is easily lost and hard to gain back.
When I look at the field I'm most familiar with (computer networking), it mirrors that: it's easy to see how often the LLM will convincingly claim something which isn't true, or is technically true but doesn't answer the right question, compared to talking to another expert.
The reality to compare against, though, is not one where people regularly get in contact with true networking experts (though I'm sure it feels like that when the holidays come around!), and compared to the random blogs and search results people are likely to come across on their own, the LLM is usually a decent step up. I'm reminded how I'd know of some very specific forums, mailing lists, or chat groups to go to for real expert advice on certain network questions, e.g. issues with certain Wi-Fi radios on embedded systems, but what I see people sharing (even technical audiences like HN) are blogs from a random guy making extremely unhelpful recommendations and completely invalid claims, getting upvotes and praise.
With things like asking AI for medical advice... I'd love it if everyone had unlimited time with an unlimited pool of the world's best medical experts as the standard. What we actually have is a world where people already go to Google and read whatever they want to read (which is most often not the quality material by experts, because we're not good at recognizing that even when we can find it), because they either doubt the medical experts they talk to or the good medical experts are too expensive to get enough time with. From that perspective, I'm not so sure people asking AI for medical advice is actually a bad thing so much as a highlight of how hard and concerning it already is for most people to get time with, or trust, medical experts.
Agreed the stated claims don't seem to make much sense. Using a point mass 1 meter away and (G*M)/(r*c^2) I'm getting that you'd have to stand next to the clock for ~61 years to cause a time dilation due to gravity exceeding 10^-16 seconds.
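Back-of-the-envelope check (the ~70 kg mass is my assumption, since the comment above doesn't state one; it's just to show the rough numbers land near 61 years):

    # Fractional gravitational time dilation from a point mass is roughly
    # G*M/(r*c^2); divide the target offset by that rate to get how long you
    # would have to stand there. M = 70 kg is an assumption.
    G = 6.674e-11    # m^3 kg^-1 s^-2
    c = 2.998e8      # m/s
    M = 70.0         # kg, assumed mass of the person
    r = 1.0          # m, distance from the clock

    rate = G * M / (r * c**2)          # fractional rate difference, ~5.2e-26
    target_offset = 1e-16              # seconds of accumulated dilation
    seconds = target_offset / rate
    print(seconds / 3.156e7, "years")  # ~61 years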
The "insane" RAM bandwidth makes sense with Apple M chips and Strix Halo because it's actually "crap" VRAM bandwidth for the GPU. What makes those nice is the quantity of memory the GPU has (even though its slow), not that the CPU has tons of RAM bandwidth.
When you go to the desktop it becomes harder to justify including beefed up memory controllers just for the CPU vs putting that towards beefing some other part of the CPU up that has more of an impact in cost or performance.
Yeah, the only use of the large bandwidth in Apple Silicon is for the GPU.
I'm always amazed by the fanboys who keep hyping this trope.
Even when feeding all cores, the max bandwidth used by the CPU is less than 200 GB/s; in fact it is quite comparable to Intel/AMD CPUs and even less than their high-end ones (x86 still rules on the multi-core front in any case).
I actually see this as a weakness of Apple Silicon, because it doesn't scale that well. It's basically the problem of their Ultra chip: it doesn't allow doubling of the compute and doesn't allow faster RAM bandwidth; you only get higher RAM capacity in exchange for slower GPU compute.
They just scaled up their mobile architecture and it has its limits.
Something about the way the article sets up the conversation nags at me a bit, even though it concludes with statements and reasoning I largely agree with. It sets out what it wants to argue clearly at the start:
> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished... The Bun acquisition blows a hole in that story.”
But what the article actually discusses and demonstrates by the end is that the aspects of engineering beyond writing the code are where the value of human engineers lies at this point. To me that doesn't seem like a revealed preference in this case. Taken back to the first part of the original quote, it's just a different wording for AI being the code writer and engineering being something else.
I think what the article really means to argue against is the claim "because AI can generate lots of code we don't need any type of engineer", but that's just not what the quote they chose to set out against is saying. Without changing that claim, the acquisition of Bun is not really a counterexample; Bun had already changed the way they do engineering so the AI wrote the code and the engineers did the other things.
These are all things I'd rather have seen the article set out to talk about as well. Instead it opens by trying to disprove a statement that AI can write the coding portion of the engineering problem by showing it being used exactly that way at Bun, as if that meant Anthropic must not actually believe it.
> That contradiction is not a PR mistake. It is a signal.
> The bottleneck isn’t code production, it is judgment.
> They didn’t buy a pile of code. They bought a track record of correct calls in a complex, fast-moving domain.
> Leaders don’t express their true beliefs in blog posts or conference quotes. They express them in hiring plans, acquisition targets, and compensation bands.
Not to mention the gratuitous italics-within-bold usage.
No no I agree: “No negotiations. No equity. No retention packages.”
I don’t know if HN has made me hyper-sensitized to AI writing, but this is becoming unbearable.
When I find myself thinking “I wonder what the prompt was they used?” while reading the content, I can’t help but become skeptical about the quality of the thinking behind the content.
Maybe that’s not fair, but it’s the truth. Or put differently “Fair? No. Truthful? Yes.”. Ugh.
I was thinking the same, but it's like they only used AI to handle the editing or something, because even throwing it into ChatGPT with "how could this article be improved: ${article}" gives:
> Tighten the causal claim: “AI writes code → therefore judgment is scarce”
As one of the first suggestions, so it's not something inherent to whether the article used AI in some way. Regardless, I care less about how the article got written and more about what conclusions really make sense.
Most of these are unfair in some way and many are wrong. What makes this funny is precisely that it has more snark than is reasonable (and often pushes bad assumptions, as snark usually does!).