> For example, if there is a formal language that contains "Good morning" and "My hovercraft is full of eels" as valid sentences, then nothing distinguishes these sentences any more.
Mind explaining a bit? Because I've no idea what you mean.
Aren't elephants and whales orders of magnitude better than us at that though (they have roughly as many cancers as we do, but with respectively 100x and 1,000x as many cells)?
Or is it the second layer that works better for them?
Different species do have different levels of protection, and different lineages tend to employ different methods of protection. For example, elephants have numerous duplicates of cancer suppression genes, whereas naked mole rats produce a variant of hyaluronan which prevents tumor formation. When compared to other great apes, humans seem to be worse at both layers of defense.
It's worth noting though that humans also have much higher levels of exposure to many carcinogens than most animals, and we screen humans for cancers at a much greater rate, so just because a species has lower cancer rates doesn't necessarily mean its cancer defenses are better.
Also, elephants have a much higher copy number of a gene called p53. It codes for a protein that forces suicide in cells that have damaged DNA (think UV light, cigarette smoke, age, etc). Losing p53 function is a common 'early' mutation in cancer that allows further mutations to accumulate and the disease to progress. Having many more copies of p53 makes it less likely that p53 function will be eliminated.
I am not sure how much consensus there is around it, but this is so cool I have to repeat it, sorry: whales and elephants do develop cancers, but since those cancers also accumulate mutations, their cancers get cancers of their own, and overall the cancers are never able to grow big enough to threaten the whole organism.
The idea that tumors develop their own tumors, which suppress the original cancer, is known as the hypertumor hypothesis, and while it works in computer models, there isn't actually any evidence backing it up.
The hypothesis doesn't really resolve Peto's paradox, the observation that cancer rates don't scale directly with the number of cells in an organism. Not only do large organisms like whales get fewer cancers per cell division; small animals like mice also get more cancers per cell division, which cannot really be explained by a threshold beyond which hypertumors suppress tumors. The actual evidence suggests organisms just evolve whatever level of cancer resistance they need to have low odds of dying of cancer before something else kills them.
That being said, the main observation underpinning Peto's paradox was actually just due to a lack of good data. Over the years much more data has been collected from animal autopsies, and it turns out that big animals do get cancer and cancer rates actually do scale with body size; it's just that different species have varying levels of cancer protection, with the levels of protection being similar in closely related species of different sizes.
> The only way you can possibly view Safari as "the modern day IE" is if you consider the authoritative source for What Features Should Be Supported to be Chrome.
No. Safari is the modern IE in the sense that it's the default browser on a widely used OS, its update cycle is tied to the user updating the OS itself, and it drags the web behind by many years because you can't avoid supporting its captive user base.
It's even worse than IE in a sense, because Apple prevents the existence of any alternative browser on that particular OS (every non-Safari browser on iOS is just a UI on top of Safari).
But this can only be by comparison to something. And Apple is very good at keeping Safari up to date on the actual standards. You know—the thing that IE was absolutely not doing, that made it a scourge of the web.
So if it's not Chrome, what is your basis for comparison??
> But this can only be by comparison to something.
The something being the other browsers. Chrome and Firefox. Safari was even behind the latest IE before the switch to Chromium by the way.
> the thing that IE was absolutely not doing, that made it a scourge of the web.
You're misremembering: IE also kept improving its support for modern standards. The two main problems were that it was always behind (like Safari) and that people were still using old versions because it was tied to Windows, like Safari with iOS. When people don't update their iPhone, because they know it will become slow as hell as soon as they install the new iOS version on an old iPhone, or just because they don't want their UI to change AGAIN, they're stuck on an old version of Safari.
I'm sorry, but you're wrong. I am not remotely misremembering, and I'll thank you not to tell me what's happening in my own head.
IE 6 stood stagnant for years, while the W3C moved on without them, and there was no new version.
> The something being the other browsers. Chrome and Firefox.
And can you name a single thing Firefox does right, that Chrome didn't do first, or that came from an actual accepted web standard (not a proposal, not a de-facto standard because Chrome does it), that Safari doesn't do?
The reason why IE 6 kept haunting us all was because later versions were never available on Windows XP.
> actual accepted web standard
The only thing for which there is an actual standard that matters is JavaScript itself (or rather ECMAScript) and on that front Apple has pretty much always been a laggard.
Saying “Apple is compliant with all W3C standards” is a bit ridiculous when that organization was obsolete long before Microsoft ditched IE. And Apple itself acknowledges that, being one of the founding parties of the organization that effectively superseded the W3C (WHATWG).
> The reason why IE 6 kept haunting us all was because later versions were never available on Windows XP.
First of all, according to the IE Wikipedia page, that's not true—7 & 8 were available for XP.
Second of all, this ignores the fact that for five years, there was only IE6. And IE6 was pretty awful.
> Saying “Apple is compliant with all W3C standards” is a bit ridiculous when that organization was obsolete long before Microsoft ditched IE. And Apple itself acknowledges that, being one of the founding parties of the organization that effectively superseded the W3C (WHATWG).
And now you have identified a major component of the problem: in the 2000s, the W3C was the source of web standards. Safari, once it existed, was pretty good at following them; IE (especially IE6) was not.
Now, there effectively are no new standards except for what the big 3 (Safari, Chrome, and Firefox) all implement. And Firefox effectively never adds new web features themselves; they follow what the other two do.
So when you say "Safari is holding the web back," what you are saying is "Safari is not implementing all the things that Google puts into Chrome." Which is true! And there is some reason to be concerned about it! But it is also vital to acknowledge that Google is a competitor of Apple's, and many of the features they implement in Chrome, whether or not Google has published proposed standards for them, are being implemented unilaterally by Google, not based on any larger agreement with a standards body.
So painting it as if Apple is deliberately refusing to implement features that otherwise have the support of an impartial standards body, in order to cripple the web and push people to build native iOS apps, is, at the very best, poorly supported by evidence.
That's what their marketing wants you to believe, at least.
Their privacy policy is very clear that this is not the case, though:
> we may collect a variety of information, including:
> […]
> Usage Data. Data about your activity on and use of our offerings, such as app launches within our services, including browsing history; search history;
> collapsing literacy rates through the prevention of teaching phonics
Is that even a true thing?
I'm asking because in my country (France) this has been a talking point of the conservative party for the past two decades, and it's also 100% an urban legend. So I wonder if they just imported a (real) US educational controversy, or if it's an urban legend there as well and they just imported the bullshit.
The switch away from teaching phonics, and the consequent drop in literacy, is real.
It is not particularly something that was pushed by teacher unions.
The "three cueing model" was being pushed for some time as being more effective due to widely-promoted misunderstanding and misinformation by one guy whose name I'm afraid I've forgotten (I was reading about this a few months ago, and don't have the references to hand). It correctly recognizes that highly adept readers do not mentally sound out every word, but rather recognize known words very quickly from a few individual aspects of the word. However, this skill absolutely 100% requires having first learned the fundamentals of reading through phonics, and its proponents thought they could skip that step.
I'd like to read your sources on that, because from what I checked in the meantime it looks like it's more of a “culture war” thing than a real thing. See: https://www.sdkrashen.com/content/articles/great_plummet.pdf which provides figures for test results between 1984 and 1990 showing no such decline over that period.
Also, the PDF I quoted is from 2002, 10 years after California had legislated in favor of phonics in 1992 (and phonics had never stopped being used, no matter what the urban legend says).
You realize that your link talks about the same time period as mine, and not about something that allegedly happened in the “mid to late 2000”?
> Goodman's three-cueing idea formed the theoretical basis of an approach known as "whole language" that by the late 1980s had taken hold throughout America.
For some reason my brain read the title as “3D printed motherboard” and I was really curious about how this was even possible, and I ended up being disappointed by the lack of detail on the github readme.
It's only after a few more seconds back on the HN front page that I realized my mistake.
Less exciting than what I read but cool project nonetheless.
My understanding is that home etching is probably still more practical and neither of those is going to match professional quality, but conductive filament and the "print everywhere except where the metal goes and then add metal" options should both be within reach of the upper end of the hobbyist sector.
It's not exactly 3d printing but Bad Obsession Motorsports took a small mill, stuck a hot end into the tool holder, fed solder instead of filament into it, and "printed" traces onto a blank PC board.
I thought it was pretty clever but they admit it was tricky to make work at all, let alone get good results.
Yes, but the fact that it's primarily a Chinese export makes the profit-as-the-cause narrative much less convincing. The US FDA is ignoring evidence to protect a Chinese supplier?
> Yes, but the fact that it's primarily a Chinese export makes the profit-as-the-cause narrative much less convincing. The US FDA is ignoring evidence to protect a Chinese supplier?
Who said it was done to protect the pesticide's manufacturer? It protects the industry as a whole: the agro-industry aims for low costs, and that means using cheap pesticides to increase crop yield, even if it ends up harming farmers in the process.
I used to be a proponent of industrial agriculture, because technological progress of all kinds (genetics, chemicals, mechanisation) is the reason why food is now abundant.
But the massive disinformation campaigns and targeted harassment of researchers, as well as the outright corruption of science, are where they lost me. Surely you wouldn't do things like that if you had a clear conscience.
I don't think anyone believes the “if it compiles, it works” phrase literally.
It's just that once it compiles, Rust code will work more often than code in most other languages, but that doesn't mean Rust code will automatically be bug-free, and I don't think anyone believes that.
Yeah, even the official Rust book points this out and, if my memory serves me right (no pun intended), also gives an example in the form of creating a memory leak (not to be confused with memory unsafety).
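If I recall correctly, the book's example builds a reference cycle with Rc and RefCell. Here's a minimal sketch of that kind of leak (the Node type below is mine, not the book's exact code):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node that can optionally point at another node of the same type.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });

    // Close the cycle: a -> b -> a. Each Rc's strong count is now 2,
    // so neither count ever reaches 0 and the nodes are never dropped.
    *a.next.borrow_mut() = Some(Rc::clone(&b));

    println!("a strong count = {}", Rc::strong_count(&a)); // 2
    println!("b strong count = {}", Rc::strong_count(&b)); // 2
} // The cycle is leaked here, without a single line of unsafe code.
```

No unsafe anywhere, yet the two nodes are never freed, which is exactly the point: leaking memory is safe, it's just a bug.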
As "unsafe". An example would be how AMD GPUs some time ago didn't free a program's last rendered buffers and you could see the literal last frame in its entirety. Fun stuff.
That is not a memory leak though! That's using/exposing an uninitialized buffer, which can happen even if you allocate and free your allocations correctly. Leaking the buffer would prevent the memory region from being allocated by another application, and would in fact have prevented that bug from happening.
This is also something that Rust does protect against in safe code, by requiring initialization of all memory before use, or using MaybeUninit for buffers that aren't, where reading the buffer or asserting that it has been initialized is an unsafe operation.
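To make the MaybeUninit part concrete, here's a minimal sketch (the buffer size and contents are made up for illustration):

```rust
use std::mem::MaybeUninit;

fn main() {
    // A buffer the compiler treats as uninitialized: safe code cannot read it yet.
    let mut buf: [MaybeUninit<u8>; 8] = [MaybeUninit::uninit(); 8];

    // Writing through MaybeUninit is safe.
    for (i, slot) in buf.iter_mut().enumerate() {
        slot.write(i as u8);
    }

    // Asserting "this is now fully initialized" is the unsafe step: the
    // compiler cannot verify it, so the programmer takes responsibility.
    let bytes: [u8; 8] = unsafe { std::mem::transmute(buf) };
    println!("{:?}", bytes);
}
```

The safe API only lets you write through the MaybeUninit slots; claiming the whole buffer is initialized is the step the compiler can't check, hence the unsafe block.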
It's a security hole. Rust doesn't prevent you from writing unsafe code that reads it. The bug wasn't that it could be read by well-conforming code, it was that the buffer was handed off uninitialized to user space at all.
There are definitely people in the ecosystem who peddle sentiments like "Woah, Rust helps so much that I can basically think 'if this compiles, everything will work', and most of the times it does!", and that's the confusing part for many people. Examples found in 30 seconds of searching:
I read the comments you linked and don't really think they literally believe Rust is magic. I dunno though, I guess I could imagine a vibe coder tacitly believing that. Not saying you're wrong. I just think most people say that tongue in cheek. This saying has been around in the Haskell community for decades. Feels like a long-running joke at this point.
When I’ve said that, I’ve meant that almost the only remaining bugs were bad logic on my part. It’s free from the usual dumb mistakes I would make in other languages.
I don't know the authors of those posts, so I don't want to put words in their mouths, but neither seems to be delusional about the "if it compiles, it works" phrase. The first one qualifies it with "most of the time", and the second one explicitly mentions using type state as a tool to aid correctness...
I don't doubt there are people who take that phrase too literally, though.
Both examples you linked are people talking casually about the nature of Rust, rather than about the specific rule. That goes very much with your parent commenter's assertion that nobody takes it literally. The first example even starts with 'Most of the time' (this is true, though not guaranteed. I will explain further down). Human languages are imperfect and exaggerations and superlatives are common in casual communication.
But I have not seen any resource or anyone making technical points ever assert that the Rust compiler can verify program logic. That doesn't even make sense - the compiler isn't an AI that knows your intentions. Everybody is always clear that it only verifies memory safety.
Now regarding the 'most of the time' part. The part below is based purely on my experience and your mileage may vary. It's certainly possible to compile Rust programs with logical/semantic errors. I have made plenty. But the nature of C/C++ or similar manually memory-managed languages is such that you can make memory safety bugs quite easily and miss them entirely. They also stay hidden longer.
And while logical errors are also possible, most people write and test code in chunks small enough that they feel confident they can understand and analyze them entirely in their head. Thus logical errors tend to get caught and eliminated earlier than memory safety bugs.
Now since Rust handles the memory safety bugs for you and you're reasonably good at dealing with logical bugs, the final integrated code tends to be bug-free, surprisingly more often than in other languages - but not every time.
There is another effect that makes Rust programs relatively more bug-free. This time, it's about the design of the code. Regular safe Rust, without any runtime features (like Rc, Arc, RefCell, Mutex, etc), is extremely restrictive in what designs it accepts. It accepts data structures that have a clear tree hierarchy, and thus a single-owner pattern. But once you get into stuff like cyclic references, mutual references, self references, etc, Rust will simply reject your code even if it could be proven correct at compile time. You have three options in that case: use runtime safety checks (Rc, RefCell, Mutex, etc; this is slightly slower), use an unsafe block and verify the code manually, or use a library that does the previous one for you.
Most of the code we write can be expressed in the restricted form that safe Rust allows without runtime checks. So whenever I face such issues, my immediate effort is to refactor the code in such a way. I reach for the other three methods only if this is not possible - and that's actually rare. The big advantage of this approach is that such designs are relatively free of the vast number of logical bugs you can make with a non-tree/cyclic ownership hierarchy. (Runtime checks convert memory safety bugs into logical bugs. If you make a mistake there, the program will panic at runtime.) Therefore, the refactored design ends up very elegant and bug-free much more often than in other languages.
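For what it's worth, here's a minimal sketch of the 'runtime checks' option, with made-up names, using Rc<RefCell<T>> to share a value outside a strict single-owner tree:

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Two owners of the same counter: not expressible with a single-owner
    // tree and plain &mut references, so the borrow checks move to runtime.
    let counter = Rc::new(RefCell::new(0));
    let other_owner = Rc::clone(&counter);

    *counter.borrow_mut() += 1;
    *other_owner.borrow_mut() += 1;
    println!("count = {}", *counter.borrow()); // 2

    // The aliasing rules still exist, they are just checked at runtime:
    let _guard = counter.borrow_mut();
    // counter.borrow(); // uncommenting this panics: "already mutably borrowed"
}
```

Taking a second borrow while a mutable borrow is alive panics instead of failing to compile, which is exactly the "memory safety bugs become logical bugs" trade-off described above.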
> "Woah, Rust helps so much that I can basically think 'if this compiles, everything will work', and most of the times it does!"
I think this is a fairly bad example to pick, because the fact that the person says “I can basically think” and “most of the times it does” (emphasis mine) shows that they don't actually believe it will produce bug-free programs.
They are just saying that “most of the time” the compiler is very very helpful (I agree with them on that).