
"[They] break referential equality and equational reasoning"

Please explain?

Mutations can cause side effects but don't have to. You seem to think that limiting program behavior _necessarily_ eases informal reasoning, but that's not the case. Your argument is also highly hyperbolic in terms of testing. A properly designed system with immutability needs to be tested no less than a properly designed system with mutability.



Sure!

Mutations ARE side effects. Consider two situations with a function and a mutable variable:

First: Your function depends on a value that can mutate. Now you have a hidden parameter to your function. You have to test the function under all reasonable conditions by which the value can change. The function is not isolated from the rest of your program any more, and you have to check who has access to that value. That's a lot to keep track of.
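The first situation can be sketched in a few lines of Python (the names here are illustrative, not from the thread): a function that reads a module-level mutable value has a hidden input that never appears in its signature.

```python
rate = 0.1  # mutable module-level state

def price_with_tax(price):
    # Hidden dependency: the result changes whenever `rate` is reassigned,
    # even though the call site looks identical.
    return price * (1 + rate)

a = price_with_tax(100)   # approximately 110 while rate == 0.1
rate = 0.2                # someone, somewhere, mutates the shared value
b = price_with_tax(100)   # approximately 120 for the very same call

# The pure version makes every input explicit, so each call is self-contained:
def price_with_tax_pure(price, rate):
    return price * (1 + rate)
```

To test `price_with_tax` honestly you would have to exercise it under every value `rate` can take; the pure variant only needs its explicit arguments covered.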

Second: Your function mutates a value. Now you have to keep track of everyone who references that value, and everywhere your function is invoked.
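The second situation is the aliasing problem, which a short Python sketch (hypothetical names) makes concrete: once a function mutates its argument, every alias of that value changes too.

```python
def normalize(scores):
    # Mutates the caller's list in place rather than returning a new one.
    scores.sort()
    return scores

leaderboard = [3, 1, 2]
view = leaderboard        # an alias, perhaps held by another part of the program
normalize(leaderboard)

# `view` changed as well, even though it was never passed to normalize:
assert view == [1, 2, 3]
```

To know whether that mutation is safe, you must track every reference to the list and every call site of `normalize`, which is exactly the bookkeeping described above.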

In both situations, the number of scenarios you must test is greatly expanded compared to when you have a 'pure' function that does not operate on mutable memory.

Referential transparency means that for a fixed input value you can replace a function invocation with a static value. It's just a mapping that is always the same, whether it's computationally expensive or not. That means that you can always expect the same behavior when you compose that function. But even more important, you can assert that your function is equal to something else -- always, under all conditions.

This means that you can make a series of equational substitutions. And THAT means that you can use equational reasoning to prove things about your function.

And if your program is just a bunch of functions that are composed together, well then you can prove things about your program, too.
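A minimal Python sketch of equational substitution with two made-up pure functions: because each call can be replaced by its value, a composed expression can be rewritten step by step, like algebra.

```python
def double(x):
    return 2 * x

def inc(x):
    return x + 1

# Equational reasoning about the composition:
#   inc(double(3))
#     = inc(6)    because double(3) = 6, always, under all conditions
#     = 7         because inc(6) = 7, always
result = inc(double(3))
assert result == 7  # no hidden state anywhere can invalidate this
```

If either function read or wrote mutable state, neither substitution step would be valid, and the chain of reasoning would break.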

Programs are getting really complex. Really, really, really complex. Having the power to make more safe assertions about your programs is becoming more important. Being able to prove things about your programs using well known proving techniques, such as those borrowed from more formal maths like abstract algebra, is really useful.

You still need to test, but what you need to test becomes of much narrower scope and much less cumbersome. This is why functional programming is indeed becoming more popular.


If a function receives immutable state, returns immutable state, and does not depend on or modify outside mutable state, then that function has no side effects. Mutating state within the function itself will not affect anything outside it, and so mutations are not side effects, at least by default.
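That point can be illustrated with a tiny Python example (illustrative, not from the thread): mutation confined to a function's own locals is invisible to callers, so the function is observably pure.

```python
def total(xs):
    acc = 0           # local mutable accumulator
    for x in xs:
        acc += x      # mutation, but only of a local binding
    return acc        # callers see nothing but input -> output

nums = (1, 2, 3)      # immutable input (a tuple)
assert total(nums) == 6
assert nums == (1, 2, 3)  # input unchanged; no observable side effect
```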

About your two situations, I am guessing you mean that the functions rely on state that is not passed in, but rather referenced from the outer scope? I agree that that complicates both reasoning and testing, and so it should be avoided. In C/C++ this type of design (i.e. relying on static-scope state) is practically always avoided, as there's rarely any need for it.

If instead you mean that mutable references are being passed in as parameters, then isn't this, logic-wise, similar to assignment/binding in a functional language, as long as you avoid propagating the side effect to other functions?

I am personally a great fan of immutability, just not of functional languages. I can see the justification for limiting the tools at your disposal to guarantee certain things; it's just that in this case I think they are not sufficient.

---

For those downvoting, gee thanks. I see how trying to have a discussion is hurting your feelings.


Yeah! If your mutability is completely limited to the scope of your function, then you should be fine.

If your programming language takes the power to mutate away from you then you have even less to worry about: You don't have to rely on self-discipline to not write a bad program, the compiler tells you it's not good.

Even in really small functions, with tiny scopes, I think reasoning about mutation takes a real mental toll. You can express almost everything you need (well, I don't really know you or what you are programming) with maps and folds and recursion.
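As a small illustration of that last point, here is a Python sketch (hypothetical data) of a map and a fold replacing an explicit mutating loop:

```python
from functools import reduce

xs = [1, 2, 3, 4]

# map: transform each element without mutating xs
squares = list(map(lambda x: x * x, xs))            # [1, 4, 9, 16]

# fold: combine the elements into a single summary value
summed = reduce(lambda acc, x: acc + x, squares, 0)  # 30

# xs itself is untouched throughout:
assert xs == [1, 2, 3, 4]
```

The same computation with an accumulator variable would work, but the map/fold form states directly *what* is computed rather than *how* the state evolves.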

In object oriented languages the traditional belief has been that encapsulation with privacy modifiers and getters and setters is sufficient to make well behaved programs. I don't think that goes far enough. I think that to reason about programs effectively you need some sort of immutability guarantee.


That's just handwaving. Assume for a second that your reader knows about referential transparency, has taken advanced FP in university, and has programmed in/with and implemented higher-order mechanisms.

Assume that the reader also knows how substitutability makes some variants of FRP much, much harder to both implement and reason about, has Haskell code in production that only two people at the company claim to understand, and knows about the "Principle of Least Expressiveness": "When programming a component, the right computation model for the component is the least expressive model that results in a natural program." And also knows how to interpret it, namely: "Note that it doesn't say to always use the least possible expressive model; that will result in unnatural (awkward, hard to write, hard to read) programs in at least some cases" (http://c2.com/cgi/wiki?PrincipleOfLeastPower)

Assume that the person has seen people be very productive with simple dataflow tools such as Quartz Composer, and then hit a wall because that simpler programming model doesn't just have a low threshold, it also has a low ceiling. And has read in a report on 15+ years of constraint programming that students consistently found it difficult to think ("reason") in a functional style. (see http://blog.metaobject.com/2014/03/the-siren-call-of-kvo-and...)

So.

What evidence do you have for your assertions? Not arguments or logical inferences from your assumptions, but actual evidence?


I disagree that it is just handwaving. I assert that I gave lots of evidence, namely that the scope of what you need to test to assure correct operation of your program is much narrower.

You can reason about that intuitively even, without trying to prove anything. The less stuff I need to test, the less stuff that my brain needs to think about.

I'm not quite sure what kind of evidence you're looking for. I'm also not sure that dismissing all of the evidence I provided as handwaving is valid!

But I can give you all kinds of anecdotal evidence if you like? I gave, I think, two pretty concrete but general examples.


You are still confusing logical inferences from unproven assertions with evidence. You claim that what you need to test is narrower, and I understand your reasoning as to why you think that this is true, but there is no evidence that your claims are actually true.

On the other hand, there was a paper just recently (can't currently find it), with empirical evidence that surprisingly few tests were required to weed out large swathes of potential misbehavior, much fewer than could reasonably be expected.

Or there is Robert Smallshire's data mining of GitHub errors, which showed type errors in dynamically typed languages to be exceedingly rare (1-2%), when everyone just knows they must be common and problematic.


We could both give each other empirical evidence all day.

I wouldn't really call my assertions unproven. Speaking of referential transparency and equational reasoning is kind of the core of a lot of computer science.

I'm not really sure what kind of evidence you are asking for.

I will say that I think some programming languages are more interesting than others, and some are a more productive, better use of time than others.



