
> That gnarly horrid mess that only a few greybeards grok and has massive test coverage, a long tail of requirements enforced by tests and experience, and a culture of extreme rigor? Longer reviews, more rollbacks, and less likely to be rewritten.

I'd say this is actually the most likely to be rewritten, because high test coverage is a massive enabler for such a rewrite, and because having a project that “only a few greybeards grok” sounds like a big organizational liability.

That being said, while I'm pretty convinced that Rust brings massive benefits, I agree with you that these measurements shouldn't be taken as if they were rigorous scientific proof. It's more one additional piece of anecdotal evidence that Rust is good.



> It's more one additional piece of anecdotal evidence that Rust is good.

But that means it's likely to be the worst kind of science:

- Group of people agree that Rust is good. This is a belief they hold.

- The same group of people feel the need to search for arguments that their belief is correct.

- The group does "science" like this.

And then the rest of us have a data point that we think we can trust, when in reality, it's just cherry picked data being used to convey an opinion.


> And then the rest of us have a data point that we think we can trust, when in reality, it's just cherry picked data being used to convey an opinion.

Calling what Google did here "science" and cherry-picked is quite a disservice. It's observational data, but do you have any objection to the methodology they used? Or just (assumed?) bad vibes?


In science, you go out of your way to control for confounding factors.

This isn't that.


> In science, you go out of your way to control for confounding factors.

There's tons of observational science done in a very similar fashion to the article, where there is simply no way to properly control for confounding factors in the data available.


It’s a good start, and even if the error bars are wide enough to land a 747 in, their numbers show differences of orders of magnitude. That should raise eyebrows in even the biggest skeptics.


Going out of your way here would involve conducting unethical experiments, which is absolutely frowned upon by scientists.

And many experiments are simply impossible to do in a manner that completely removes every outside factor. But that doesn't mean an experiment's results are automatically bad.


Having been close to someone who went through the PhD process into a research career, I can say this is a sadly common but romantic and incorrect view of science as practiced in the world today.


A lot of what folks call science isn't science.

So, I'm not being romantic. I'm being realistic. And I'm happy to call B.S. on a lot of published research, because doing so gives me more predictive power than that research does.


I'm the first to be annoyed when political scientists and economists pretend to do science while merely extrapolating from anecdotal correlations, but this isn't something being published in a scientific journal, and nobody claims to be doing “science” in the first place.



