Hacker News

That's just handwaving. Assume for a second that your reader knows about referential transparency, has taken advanced FP courses at university, and has both programmed with and implemented higher-order mechanisms.

Assume that the reader also knows how substitutability makes some variants of FRP much, much harder to both implement and reason about, has Haskell code in production that only two people at the company claim to understand, and knows the "Principle of Least Expressiveness": "When programming a component, the right computation model for the component is the least expressive model that results in a natural program." And also knows how to interpret it, namely: "Note that it doesn't say to always use the least possible expressive model; that will result in unnatural (awkward, hard to write, hard to read) programs in at least some cases" (http://c2.com/cgi/wiki?PrincipleOfLeastPower).

Assume that the person has seen people be very productive with simple dataflow tools such as Quartz Composer, and then hit a wall because that simpler programming model doesn't just have a low threshold, it also has a low ceiling. And has read a report on 15+ years of constraint programming in which students consistently found it difficult to think ("reason") in a functional style (see http://blog.metaobject.com/2014/03/the-siren-call-of-kvo-and...).

So.

What evidence do you have for your assertions? Not arguments or logical inferences from your assumptions, but actual evidence?



I disagree that it is just handwaving. I assert that I gave plenty of evidence, namely that the scope of what you need to test to ensure correct operation of your program is much narrower.

You can even reason about that intuitively, without trying to prove anything: the less stuff I need to test, the less stuff my brain needs to think about.
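To make the "narrower test scope" claim concrete, here is a minimal sketch (the function names are hypothetical, not from the thread): a pure function's behavior is fixed by its arguments alone, so a handful of assertions cover it, while an impure variant also depends on hidden mutable state that every test must set up and reset.

```python
# Pure: the result depends only on the arguments, so testing it means
# checking input/output pairs and nothing else.
def discounted_price(price: float, rate: float) -> float:
    return price * (1.0 - rate)

# Impure: correctness also depends on module-level mutable state, so the
# test surface includes every way that state can be changed.
_current_rate = 0.25

def discounted_price_stateful(price: float) -> float:
    return price * (1.0 - _current_rate)

# The pure version is fully specified by assertions like these:
assert discounted_price(200.0, 0.25) == 150.0
assert discounted_price(100.0, 0.0) == 100.0
```

The point of the sketch is only that the second function's tests must also reason about `_current_rate`, which is exactly the widening of scope being argued about.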

I'm not quite sure what kind of evidence you're looking for. And I'm not sure it's valid for you to dismiss all of the evidence I provided as handwaving, either!

But I can give you all kinds of anecdotal evidence if you like. I gave, I think, two pretty concrete but general examples.


You are still confusing logical inferences from unproven assertions with evidence. You claim that what you need to test is narrower, and I understand your reasoning as to why you think that this is true, but there is no evidence that your claims are actually true.

On the other hand, there was a paper just recently (can't currently find it) with empirical evidence that surprisingly few tests were required to weed out large swathes of potential misbehavior, far fewer than could reasonably be expected.

Or there is Robert Smallshire's data mining of GitHub errors, which showed type errors in dynamically typed languages to be exceedingly rare (1-2%), when everyone just knows they must be common and problematic.


We could both give each other empirical evidence all day.

I wouldn't really call my assertions unproven. Referential transparency and equational reasoning are at the core of a lot of computer science.
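For readers unfamiliar with the term: equational reasoning is the property that a referentially transparent expression can be replaced by its value anywhere without changing the program's meaning. A small illustrative sketch (names are mine, not from the thread):

```python
# A pure function: square(3) denotes 9 every time it appears.
def square(x: int) -> int:
    return x * x

# Because of that, these two expressions are interchangeable; factoring
# out the common subexpression is a meaning-preserving rewrite.
left = square(3) + square(3)
right = 2 * square(3)
assert left == right == 18
```

The same substitution is unsafe the moment `square` reads or writes shared state, which is the connection back to the testing-scope argument above.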

I'm not really sure what kind of evidence you are asking for.

I will say that I think some programming languages are more interesting than others, and some are a more productive, better use of time than others.



