Hacker News | timtadh's comments

It looks like v2.1 has fixed that.

Also, +1 this is awesome.

@dang if you are here: HN should do this natively!


Having the Keybase link/proof in profile lets you prove identity without changing the site. It's worked fine so far. It's also how I found out about Keybase.


In response to several threads here: it is important to distinguish between scientists being self-critical and non-scientists being critical of the scientific method. For instance, there is a long history of scientists criticizing how the scientific process is currently conducted, for the purpose of improving the scientific endeavor. That work is sometimes used by non-scientists to question the overall scientific method. However, such use is invalid, as the scientific self-criticism

1. assumes the validity of the scientific method

2. relies on the scientific method as its critical lens

Whereas those who critique science as a whole:

1. assume that the scientific method does not work and does not arrive at "truth"

2. then use scientists being self critical to prove #1.

Such a "proof" does not work, as it uses the assumption "the scientific method arrives at truth" to derive the contradiction "the scientific method does not arrive at truth". See for instance this comment: https://news.ycombinator.com/item?id=16859200

In reality, work on reproducibility is about improving the practice of science overall. It does not in itself show that science is inherently untrustworthy. What it does show is that scientific discovery is difficult, that it takes a lot of effort, and that new findings should be treated critically. What does "critically" mean in this context? It means, within the boundaries of science, analyzing the theoretical basis, hypothesis, method, and experimental results for potential flaws. It does not mean being skeptical by default because science "doesn't work."


I agree with your overall point, but technically speaking it is logically valid to prove a hypothesis false by first assuming it and then deriving a contradiction, even when the contradiction is the negation of the original hypothesis (as it is in your example).

What you should have said is that some critics start with the premise "the scientific method does not arrive at truth", and then use other people's arguments that depend on the premise "the scientific method arrives at truth" to support their claim, which is indeed logically invalid.


>scientists criticizing how the scientific process is currently conducted for the purposes of improving the scientific endeavor.

I think what is happening here is a bit more serious. They are showing a widespread crisis, not just giving some minor feedback to improve the process.

>It does not in itself show that science is inherently untrustworthy.

I think when statistics is involved, the results are inherently untrustworthy. This is not really surprising, because there are a whole bunch of ways that studies involving statistics can go wrong. And we are still finding new ways they can go wrong.

Then there are things like publication bias, which take this to a whole new level. They mean that a biased body of journals can project any consensus it favors just by selecting studies that fit its narrative. The inherent issues with statistics mean that you can find studies showing any possible outcome.
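To make the publication-bias point concrete, here is a toy simulation (my own illustration, not anything from the article): many studies of an effect that does not exist, with a hypothetical "journal" that only publishes positive, significant results. The filter alone produces a steady stream of spurious findings.

```python
import random
import statistics

random.seed(0)

def null_experiment(n=30):
    """One 'study' of a treatment with zero true effect:
    n samples from a standard normal, tested against mean 0."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    m = statistics.mean(xs)
    se = statistics.stdev(xs) / n ** 0.5
    return m / se  # approximate t statistic

# Run many studies of a nonexistent effect.
stats = [null_experiment() for _ in range(1000)]

# A journal that only publishes "positive, significant" results
# (t > 2, roughly p < 0.05 one-sided) still gets papers to print,
# every one of them spurious.
published = [t for t in stats if t > 2]
print(len(published))
```

Run it and a few percent of the null studies clear the significance bar; select only those and the "literature" unanimously supports an effect that is not there.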


>"I think when statistics is involved, the results are inherently untrustworthy. This is not really surprising because there is a whole bunch of ways these studies that involve statistics could go wrong. And we are still finding new ways on how this could go wrong."

Another very real issue here is that malicious use of statistics can show nearly anything, in ways that can be extremely difficult to detect even when the maliciousness is hidden in plain sight. A step beyond that, there's plain old number fudging, which is almost impossible to prove since variance provides sufficient plausible deniability. And finally there is, of course, plain old ineptitude. As you mention, even when trying to do things completely by the book, statistics are incredibly difficult to get right.

Something that comes to mind here is the recent MIT study stating that Uber drivers earned $3.37/hour. That study was completely broken. [1] It's debatable whether the cause was maliciousness or ineptitude, but the point is that these problems arise, with a disturbing regularity, even when the most reputable of names are attached to them.

[1] - https://qz.com/1222744/mits-uber-study-couldnt-possibly-have...


Feynman said, "The first rule is that you must not fool yourself. And you are the easiest person to fool."

There can be malice, sure. But there can also be desire to believe. And, hey, here's a statistical analysis that shows what the investigator is already biased to believe anyway...


>I think when statistics is involved, the results are inherently untrustworthy.

Ummm...are you kidding? Statistically vetted results are inherently UNCERTAIN, but how could they possibly be inherently untrustworthy?

Even if a mechanistic effect is observed, its relationship to a particular cause or influence is only established statistically. In fact, the very observation is often performed under the umbrella of statistical calibration of appropriate instruments.

As Pearson said, "Statistics is the grammar of science".


I cannot speak for the person you are responding to, but in my agreement with his critique of statistics, I am implicitly speaking of social statistics. I think there is a vast difference between, e.g., statistical modeling of the behavior of electrons and, e.g., statistical modeling of some sort of human behavior.


...but that just isn't true. Statistical analysis is (among other things) a way to quantify our uncertainty. If done appropriately, the statistics simply communicate the role probability played in moving from the experimental premises to the results.

The level of uncertainty in most (perhaps all) experiments involving particles in a vacuum is far lower than in experiments involving human behavior. Statistics doesn't create the uncertainty, it communicates the uncertainty.

I can use propositional logic to arrive at the conclusion that "unicorns are real", yet I don't encounter many people throwing logic itself under the bus because of that. People lie while speaking English every day, yet I don't encounter many people claiming that English is inherently untrustworthy.

Statistics has been a whipping boy for a long time among people who don't know much about statistics or who fail to think clearly about what is being said.
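A small sketch of the "communicates, not creates" point (my own illustration, with made-up noise levels): the same estimator applied to a tightly controlled measurement and to a noisy behavioral one. The method is identical; only the reported interval width changes.

```python
import random
import statistics

random.seed(1)

def ci95(samples):
    """95% confidence interval for the mean (normal approximation)."""
    m = statistics.mean(samples)
    se = statistics.stdev(samples) / len(samples) ** 0.5
    return m - 1.96 * se, m + 1.96 * se

# Same true mean, two settings: a precise physical measurement
# (sigma = 0.01) vs a noisy human-behavior one (sigma = 1.0).
physics = [random.gauss(5.0, 0.01) for _ in range(100)]
survey  = [random.gauss(5.0, 1.0) for _ in range(100)]

lo1, hi1 = ci95(physics)
lo2, hi2 = ci95(survey)
print(hi1 - lo1)  # narrow interval: little uncertainty to report
print(hi2 - lo2)  # wide interval: same method, more uncertainty
```

The statistics didn't make the survey data uncertain; the wide interval is just an honest report of the noise that was already there.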


> If done appropriately...

This is the whole point. The difficulty of doing it appropriately, and of deciding whether it was done appropriately, is exactly what makes the results untrustworthy.

>People lie while speaking English every day yet I don't encounter many people claiming that English is inherently untrustworthy.

No one expects a statement to be true simply because it was uttered in English.


I'm not entirely sure what you're trying to say, and your post formatting is not helping.

I would recommend reading the article; the executive summary covers the main points. The issue is not statistics in and of themselves, but how they are used in an overwhelming number of studies, particularly those in the social sciences or those dealing with human physiology.


I too wish there were more standard containers available in Go's standard library. However, I don't think there will be a collections package unless and until generics make it into the language. That said, there are some pretty good libraries out there:

- https://github.com/timtadh/data-structures ## I wrote this one.

- https://github.com/Workiva/go-datastructures ## this one is also popular.

- https://github.com/golang/go/wiki/Projects#data-structures ## big list here.


Thank you for the links. I hope Go gets a good, solid container library that is reputable and widely used. And even better if that makes it into Go proper some day.


Most people who criticize the Dragon (Compilers etc... by Aho et al.) book seem to focus on chapters 3 and 4 which are the chapters on lexical analysis and parsing. The book has 12 chapters. Whatever your feelings on the parsing techniques, the book covers WAY more than that. It has a really good introduction to code generation, syntax directed translation, control flow analysis, dataflow analysis, and local, global and whole program optimizations.

As someone who has quite a few books on compilers, program analysis, type theory, etc., I find the Dragon book an irreplaceable reference to this day. It has a breadth of content shared by very few other books. For instance, Muchnick's classic "Advanced Compiler Design and Implementation" is really good for analysis and optimization but neglects all front-end topics. The only area where I believe the Dragon book is inadequate is type theory (I recommend Types and Programming Languages [TAPL] by Pierce and Semantics with Applications by Nielson for a gentler intro).

As to parsing, chapter 4 is not as "hip" as some people would like. However, it is solid and will teach you how to do parsing. There are newer and fancier techniques not covered in chapter 4, but in general most people would benefit from just having a solid understanding of recursive descent parsing!


Where I live $80k is a significant portion of the cost of most houses. The price will be a significant barrier to entry for houses outside of hot real estate markets.


Yeah, my brother just bought a pretty large house for $100k, but I guess people in rural North Carolina aren't the target buyer of solar roofs anyway.

I wonder at what home price an $80k roof would make sense. >$800k?


That sounds about right to me.

A roof that exceeds 10% of the purchase price is likely to push the house outside of the price range of buyers in the target market.

I know a friend of mine who has a solar system (not quite the same thing) who basically got almost nothing for it. The aesthetics of a "Solar Roof" are better, but I don't see it raising the value of the house by more than 10%.

I used their calculator using the estimates when I had my roof done last time, and it is just not economically feasible.

$60k roof vs. $6k roof.

Net of $12k savings over 30 years.

$60k over 30 years (adjusting for inflation in conservative investments) is easily going to hit $180k.
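The back-of-the-envelope math above checks out under a hypothetical ~3.7% real (inflation-adjusted) return, a rate I'm assuming for illustration; the commenter didn't specify one:

```python
# Opportunity cost of putting $60k into a roof instead of
# conservative investments, in today's dollars.
principal = 60_000
real_rate = 0.037   # assumed real return; tripling in 30 years
years = 30

future = principal * (1 + real_rate) ** years
print(round(future))  # roughly $180k, vs. ~$12k in energy savings
```

At that rate money roughly triples over 30 years, which is where the $180k figure comes from, dwarfing the quoted $12k net savings.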


I have a $2M house, and I've been bitching and moaning for a year that a new roof cost us $20k.


I dunno, I'm in the Dallas/Ft Worth area, and we have abundant sunshine and I think the price is still going to be a barrier.

But like others have said, there are plenty of people with a lot more money than I have who will jump at this. I'm OK with that. The more wealthy folks who install these and take their load off the grid, the better.


> And there goes Shakespeare. On average there's 15,000 words per play. I bet most readers do not have the education to know every single word in that 15,000.

In English (as I assume in Chinese) you can usually figure out what is being said in Shakespeare even if you don't know the exact definition of the word. You also can usually pronounce it correctly (at least for the modern pronunciation).


Absolutely. Chinese too. There are patterns among words, and if you know part of a phrase it is not hard to figure out the rest. Once you know the language you can usually manage, whether it is Chinese or Korean.


@suryabhupa How similar is this work to the grammatical inference field? There has been a lot of work over the years on specification inference, which feels similar. Many studies in specification inference learn automata representations of object interactions. I know there have been other applications of grammatical inference in software engineering as well.


Program synthesis is grammatical inference grown up, and statistical approaches are being experimented with for modern synthesis just as they were in the genetic programming & grammatical inference era. (I believe even at the SAT solver level today.)

At a quick skim, this seems fun more as (1) an experience report of jumping on the DNN train instead of other ML algorithms and (2), more intriguing to me, the training formulation (irrespective of neural nets). Dawn Song's recent explorations here also sounded pretty interesting in terms of bridging logical synthesis of general programs with statistical approaches.


Grammatical inference learns a grammar from a set of examples, where here it seems the paper is learning a program (a derivation in the grammar) from examples.

Which Dawn Song paper are you talking about here? I think among all the approaches proposed recently for neural program induction, this is the first one that is trained end-to-end and learns only from input-output examples, without any hacks!


In case people didn't click through: it is an awesome comment by the original author of the IE5 DOM tree explaining how it was implemented.


Then again, Stephen Cook, who proved the existence of the class of NP-complete problems, was awarded the Turing Award only 11 years after his paper was published: http://amturing.acm.org/award_winners/cook_n991950.cfm


I guess I live in a small world because I have no idea how to type an umlaut on a US Qwerty keyboard but I can easily type Paul Er\"{o}s.


The irony is that it's Erd\H{o}s


Have you tried the setxkbmap -layout us -variant altgr-intl keyboard variant? See https://www.google.com/amp/s/zuttobenkyou.wordpress.com/2011...


Option-u o

