
To me this is a weird list. It's a bit like removing the personality of a person and then claiming that made the person better.

Also, a lot of the things he lists are not nearly as clearly harmful as he makes them out to be.

GOTO: Used in some of the cleanest code bases out there, for example the Lua language implementation, the Redis database, and the Linux kernel. Yes, its use is limited, but it's there not because of mistakes but because it is sometimes the cleanest way.

Numbers: I don't understand this one. At the very least, having different types of numbers seems to be a necessity for anything performance oriented, such as graphics. JS not having them (they seem to have been added for WebGL as an extension, no?) is not a good reason for all new languages to do the same.

Mutations: I am personally of the view that mutation often reduces the informal reasoning complexity of a program. There's a reason functional programming hasn't caught on, and it likely never will. A hybrid approach with both immutability and mutability, like Rust's, seems most sensible.

Reference equality (referential transparency?): If you have immutability, do you not get this for free? How can you be sure about getting rid of mutations but not be sure you want objects to be equal based on data?

Each language has its own quirks that make it special, and that's part of the reason programming is fun. There's never a single right way.



GOTO: Agreed, it can be a very effective way of doing error handling in exception-free languages.
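
For what it's worth, here's a minimal sketch of that pattern in C, the style you see in the Linux kernel and similar exception-free code bases (the function and labels are invented for illustration):

    /* Classic goto-based cleanup: every error path funnels through
       the same labels, and resources are released in reverse order
       of acquisition. */
    #include <stdio.h>
    #include <stdlib.h>

    int process_file(const char *path)
    {
        FILE *f = NULL;
        char *buf = NULL;
        int err = -1;                    /* assume failure until proven otherwise */

        f = fopen(path, "r");
        if (!f)
            goto out;

        buf = malloc(4096);
        if (!buf)
            goto out_close;

        if (fread(buf, 1, 4096, f) == 0)
            goto out_free;               /* read failed or file was empty */

        err = 0;                         /* success */

    out_free:
        free(buf);
    out_close:
        fclose(f);
    out:
        return err;
    }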


GOTO, mutations, and side-effects break referential equality and equational reasoning, making it much harder to prove things about your program. I guess you can do that by endless permutations of tests or a stochastic testing system, but that sounds like a horrible way to spend your time.


A program that's harder to prove correct is not necessarily incorrect.

I'd actually argue that most of the software that runs our lives today is impossible to prove correct and yet seems to be doing quite alright overall.

You're setting up a similar false dichotomy as people who say that programs that ship without automated tests are worthless and broken.


Nah, I am saying two things:

1. Mutability makes your program harder to reason about.

2. There are better ways to spend your time than writing tests for conditions that could not exist with pure functions.

You could totally extrapolate a ton of statements from those two statements though. But I'm not doing that now, and not here.


> 1. Mutability makes your program harder to reason about.

How so?


I give an example in a sibling (cousin?) comment here: https://news.ycombinator.com/item?id=9372280


You are conflating "proving correct", which is exceedingly rare, and "reasoning about", which is extremely common.

Functional languages may make the former easier (though from the proofs we did at university, I don't really see it), but they make the latter much harder in many cases.


I disagree. I think reasoning in a programming language that limits effects with types and promotes partitioning them from the rest of your logic is dramatically simpler than in languages where mutation and effects can happen anywhere.


> mutation and effects can happen anywhere.

That's a red herring and a scare-story that FPers tell each other. They can't happen "anywhere". They happen where you tell them to happen.

Do you have any evidence that actual reasoning is simpler?

"At the end you usually get a tight loop that is easy to follow. It is also much more imperative/operational than before, which may bug Haskell-style people." -- Jonathan Blow

https://twitter.com/jonathan_blow/status/588007366618062848

"I wanted to see how hard it was to implement this sort of thing in Go, with as nice an API as I could manage. It wasn't hard.

Having written it a couple of years ago, I haven't had occasion to use it once. Instead, I just use "for" loops.

You shouldn't use it either." -- Rob Pike

https://github.com/robpike/filter

This matches my experience pretty well. When I first created Higher Order Messaging, I was also really into creating chained higher-order ops. After a little while, I noticed that a small for-loop was actually more readable than the really cool filter chains I had created. Less cool, but more readable, more comprehensible, easier to reason about, not least because of named intermediate values, something that tends to get lost in point-free style (and trust me, having been a heavy user and implementor of PostScript, I know a little about point-free style).


In a strongly typed functional program I can prove that side effects only happen in specific places that are designated by the types. I have no such assistance in, say, Java or Python. That is what I mean when I say that side effects can happen anywhere.

I would call referential transparency, which gives rise to equational reasoning, very strong evidence that reasoning becomes simpler.


You're still conflating "prove" with "reason". I don't need to prove things about my code in order to reason about it. For example, I can just look at the code in question to know whether it has side effects, and most code doesn't.

In fact, most proofs I have seen don't particularly help me reason about the code in question. On the other hand, a simple imperative execution model does help me reason about the code.


I don't think I am doing any such conflating.

Being able to reason about things equationally makes my code easier to reason about. Having a type system and a compiler assist me makes my code easier to reason about. Having fewer variants makes my code easier to reason about. Not needing to know the order in which functions have been invoked to know their return values makes my code easier to reason about.

However, I will make the statement -- and this is indeed conflating! -- If I can prove something about my code, then it is easier to reason about. The proof can be simple or complex.


Again, you are just repeating your assertions and somehow think that simply asserting them makes them true. It does not, and I don't think I can make you understand the difference between assertions and evidence, so let's call it a day.


I think my assertions are pretty well accepted virtually ... everywhere, so it didn't really occur to me that THOSE specifically were what you were calling into question. But if THAT is what we are arguing about, I agree, this is all rather circular and pointless.


They're not. Or rather, you have a very narrow definition of "everywhere". And even if they were, that doesn't make them true without evidence, quite the contrary, that makes them especially suspect (groupthink etc.). Also, if you're so sure they are universally accepted, why the need to argue them at all? And of course, if they are so universally true, it should be trivial to actually come up with actual evidence, which hasn't been the case.

Something to think about. Maybe.


"[They] break referential equality and equational reasoning"

Please explain?

Mutations can cause side effects but don't have to. You seem to think that limiting program behavior _necessarily_ simplifies informal reasoning, but that's not the case. Your argument is also highly hyperbolic in terms of testing. A properly designed system with immutability needs to be tested no less than a properly designed system with mutability.


Sure!

Mutations ARE side effects. Consider two situations with a function and a mutable variable:

First: Your function depends on a value that can mutate. Now you have a hidden parameter to your function. You have to test the function under all reasonable conditions by which the value can change. The function is not isolated from the rest of your program any more, and you have to check who has access to that value. That's a lot to keep track of.

Second: Your function mutates a value. Now you have to keep track of everyone who references that value, and everywhere your function is invoked.

In both situations, the number of scenarios you must test is greatly expanded compared to when you have a 'pure' function that does not operate on mutable memory.
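
To make the first situation concrete, a contrived C sketch (all names are hypothetical): the function's result depends on a mutable variable that never appears in its signature.

    /* 'tax_rate' is a hidden parameter: price_with_tax(100.0) can
       return different values depending on who called set_tax_rate
       last, so testing it means testing every mutation path. */
    static double tax_rate = 0.20;

    double price_with_tax(double price)
    {
        return price * (1.0 + tax_rate);   /* reads mutable state */
    }

    void set_tax_rate(double r)
    {
        tax_rate = r;                      /* silently changes the result above */
    }

    /* The pure alternative makes the dependency explicit:
       price_with_tax_pure(100.0, 0.20) is always 120.0. */
    double price_with_tax_pure(double price, double rate)
    {
        return price * (1.0 + rate);
    }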

Referential transparency means that for a fixed input value you can replace a function invocation with a static value. It's just a mapping that is always the same, whether it's computationally expensive or not. That means that you can always expect the same behavior when you compose that function. But even more important, you can assert that your function is equal to something else -- always, under all conditions.

This means that you can make a series of equational substitutions. And THAT means that you can use equational reasoning to prove things about your function.

And if your program is just a bunch of functions that are composed together, well then you can prove things about your program, too.
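
A contrived C illustration of the difference (names invented):

    /* square is referentially transparent: any occurrence of
       square(3) can be replaced by 9 without changing the program.
       Equationally: square(3) + square(3) = 9 + 9 = 18, always. */
    int square(int x) { return x * x; }

    /* counter_next is not: its result depends on hidden mutable
       state, so counter_next() + counter_next() is 1 + 2 on first
       use and 3 + 4 on the next. No such substitution is valid. */
    int counter_next(void)
    {
        static int n = 0;
        return ++n;
    }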

Programs are getting really complex. Really, really, really complex. Having the power to make more safe assertions about your programs is becoming more important. Being able to prove things about your programs using well known proving techniques, such as those borrowed from more formal maths like abstract algebra, is really useful.

You still need to test, but what you need to test becomes of much narrower scope and much less cumbersome. This is why functional programming is indeed becoming more popular.


If a function receives immutable state, returns immutable state, and does not depend on or modify outside mutable state, then that function has no side effects. Mutating state within the function itself will not affect anything outside it, and so mutations are not side effects, at least by default.
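
For instance (a minimal sketch): the loop below mutates sum, but only within its own stack frame, so from the outside sum_array behaves exactly like a pure function.

    #include <stddef.h>

    /* Mutates only locals: the same inputs always give the same
       result, and nothing outside the call can observe the mutation. */
    int sum_array(const int *xs, size_t n)
    {
        int sum = 0;                 /* local mutable accumulator */
        for (size_t i = 0; i < n; i++)
            sum += xs[i];
        return sum;
    }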

About your two situations, I am guessing you mean that the functions rely on state that is not passed in but rather referenced from an outer scope? I agree that that complicates both reasoning and testing, and so it should be avoided. In C/C++ this type of design (i.e. relying on static-scope state) is practically always avoided, as there's rarely any need for it.

If instead you mean that mutable references are being passed in as parameters then logic wise isn't this similar to assignment/binding in a functional language as long as you avoid propagating the side effect to other functions?

I am personally a great fan of immutability, just not of functional languages. I can see the justification for limiting the tools at your disposal to guarantee certain things; it's just that in this case I think they are not sufficient.

---

For those downvoting, gee thanks. I see how trying to have a discussion is hurting your feelings.


Yeah! If your mutability is completely limited to the scope of your function, then you should be fine.

If your programming language takes the power to mutate away from you then you have even less to worry about: You don't have to rely on self-discipline to not write a bad program, the compiler tells you it's not good.

Even in really small functions, with tiny scopes, I think reasoning about mutation takes a tidy mental toll. You can express almost everything you need (well, I don't really know you or what you are programming) with maps and folds and recursion.
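
You can sketch the shape of that even in C (illustrative code, not a real library API): write the fold once, and callers pass pure combining functions instead of rolling their own mutation.

    #include <stddef.h>

    /* A left fold over an int array; 'f' combines the accumulator
       with each element. Hypothetical helper for illustration. */
    typedef int (*fold_fn)(int acc, int x);

    int foldl(fold_fn f, int acc, const int *xs, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            acc = f(acc, xs[i]);
        return acc;
    }

    static int add(int acc, int x)  { return acc + x; }
    static int max2(int acc, int x) { return x > acc ? x : acc; }

    /* foldl(add, 0, xs, n) sums the array;
       foldl(max2, xs[0], xs, n) takes the maximum. */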

In object oriented languages the traditional belief has been that encapsulation with privacy modifiers and getters and setters is sufficient to make well behaved programs. I don't think that goes far enough. I think that to reason about programs effectively you need some sort of immutability guarantee.


That's just handwaving. Assume for a second that your reader knows about referential transparency, has taken advanced FP at university, and has programmed in/with and implemented higher-order mechanisms.

Assume that the reader also knows how substitutability makes some variants of FRP much, much harder to both implement and reason about, has Haskell code in production that only two people at the company claim to understand, and knows about the "Principle of Least Expressiveness": "When programming a component, the right computation model for the component is the least expressive model that results in a natural program." And also knows how to interpret it, namely: "Note that it doesn't say to always use the least possible expressive model; that will result in unnatural (awkward, hard to write, hard to read) programs in at least some cases" (http://c2.com/cgi/wiki?PrincipleOfLeastPower)

Assume that the person has seen people be very productive with simple dataflow tools such as Quartz Composer, and then hit a wall because that simpler programming model doesn't just have a low threshold, it also has a low ceiling. And has read in a report on 15+ years of constraint programming that students consistently found it difficult to think ("reason") in a functional style. (see http://blog.metaobject.com/2014/03/the-siren-call-of-kvo-and...)

So.

What evidence do you have for your assertions? Not arguments or logical inferences from your assumptions, but actual evidence?


I disagree that it is just handwaving. I assert that I gave lots of evidence, namely that the scope of what you need to test to assure correct operation of your program is much narrower.

You can reason about that intuitively even, without trying to prove anything. The less stuff I need to test, the less stuff that my brain needs to think about.

I'm not quite sure what kind of evidence you're looking for. I'm also not sure that your calling all of the evidence I provided handwaving is valid, either!

But I can give you all kinds of anecdotal evidence if you like? I gave, I think, two pretty concrete but general examples.


You are still confusing logical inferences from unproven assertions with evidence. You claim that what you need to test is narrower, and I understand your reasoning as to why you think that this is true, but there is no evidence that your claims are actually true.

On the other hand, there was a paper just recently (can't currently find it), with empirical evidence that surprisingly few tests were required to weed out large swathes of potential misbehavior, much fewer than could reasonably be expected.

Or there is Robert Smallshire's data mining of GitHub errors, which showed type errors in dynamically typed languages to be exceedingly rare (1-2%), when everyone just knows that they must be common and problematic.


We could both give each other empirical evidence all day.

I wouldn't really call my assertions unproven. Referential transparency and equational reasoning are at the core of a lot of computer science.

I'm not really sure what kind of evidence you are asking for.

I will say that I think some programming languages are more interesting than others, and some are a more productive, better use of time than others.



