General recursion in programs (which is what makes termination undecidable) requires a fix-point operator of type (a -> a) -> a.
The logical equivalent of this is an axiom (P -> P) -> P which lets you prove anything. So non-termination in a programming language corresponds to being able to prove anything - including False - in the logic.
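To make that concrete, here's a minimal sketch of such a fix-point operator in Python (the name `fix` is mine, and it's restricted to function-valued results so evaluation can be delayed behind a lambda):

```python
def fix(f):
    # fix : (a -> a) -> a, specialised to function-typed 'a' so that
    # Python's eager evaluation doesn't loop while building the value
    return lambda *args: f(fix(f))(*args)

# general recursion recovered with no self-reference in the definition
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

fact(5)  # 120
# calling fix(lambda x: x)(...) would loop forever -- the computational
# reading of "a proof of any P from P -> P"
```

The divergent case is exactly the point: `lambda x: x` inhabits `P -> P` for every `P`, so `fix` applied to it "proves" anything, at the cost of never terminating.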
I think SSA is related to CPS, but I'm not sure of the correspondence with Linearity. I think the key difference (but I'm not sure), is that SSA only asserts that a variable is assigned once, but might be consumed multiple times. Linearity says that things must be consumed exactly once. For instance, I think:
y <- x + 1
z <- x + 2
is a legal SSA expression, but it is not a legal linear expression because x appears twice.
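The difference can be made concrete with a toy use-counter in Python (my own throwaway representation: each statement as a (target, operands) pair; it only counts operand uses, so it's a sketch rather than a full linearity check):

```python
# y <- x + 1 ; z <- x + 2
program = [("y", ["x"]),
           ("z", ["x"])]

targets, use_counts = [], {}
for target, operands in program:
    targets.append(target)
    for v in operands:
        use_counts[v] = use_counts.get(v, 0) + 1

# SSA: every variable is assigned at most once
is_ssa = len(targets) == len(set(targets))
# linearity (roughly): additionally, every variable is consumed exactly once
is_linear = is_ssa and all(n == 1 for n in use_counts.values())

is_ssa     # True
is_linear  # False: x is consumed twice
```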
I think you could say that. Another way to look at it would be to say it's about organising your code in such a way that side-effects don't really matter. For instance, if only I have access to a reference, then I'm free to update it without risk of disrupting anyone else's world view. You could say that there is no side-effect from my update because no-one can observe the change.
This is nice for functional programming because it gives you the performance of mutable state and destructive update, but in a way that is 'pure', or side-effect free, from the outside.
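A tiny Python illustration of that idea (the name `sorted_pure` is made up): the mutation is real, but since only the function holds the reference, no caller can observe it:

```python
def sorted_pure(xs):
    # take a private copy, then mutate it destructively; the caller's
    # list is never touched, so from the outside this is a pure function
    buf = list(xs)
    buf.sort()          # in-place, 'impure' step hidden inside
    return buf

data = [3, 1, 2]
sorted_pure(data)  # [1, 2, 3]
# data is still [3, 1, 2]: the update is unobservable, hence no side-effect
```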
I also get the impression that some people view using dynamically typed languages as a badge of honor, taking more expertise to harness the greater 'expressive' power, all whilst juggling the types in your head. Bad programmers can't use them properly, but if you're one of the good ones then you're not held back by rigid types.
I'm one of the dumb ones and just let the computer do the checking for me.
It's already been flirted with in JS I believe [1], though not specifically for Flow. The problem is that it falls down in the higher-order case, which happens quite a lot in JS. Also, I don't think JS has a mechanism like sys.setprofile that deals with a lot of the pain points.
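For context, a rough sketch of the sys.setprofile trick in Python (collecting violations rather than raising, and only handling simple top-level annotated functions; anything beyond that is where it gets painful):

```python
import sys

errors = []

def profiler(frame, event, arg):
    # on every Python-level call, compare annotated parameters against
    # the actual arguments found in the new frame's locals
    if event != "call":
        return
    func = frame.f_globals.get(frame.f_code.co_name)
    hints = getattr(func, "__annotations__", None) or {}
    for name, expected in hints.items():
        value = frame.f_locals.get(name, profiler)  # profiler as sentinel
        if isinstance(expected, type) and value is not profiler \
                and not isinstance(value, expected):
            errors.append((frame.f_code.co_name, name))

def double(x: int):
    return x * 2      # happens to 'work' for strings too

sys.setprofile(profiler)
double(2)       # passes the check
double("oops")  # recorded as a violation, not crashed
sys.setprofile(None)
```

Note this only sees the arguments at the call boundary; a higher-order argument like `f: Callable[[int], int]` can't be checked this way without wrapping `f`, which is exactly the hard case mentioned above.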
I'd consider those languages optionally typed, rather than gradually typed, as they don't insert run-time type checks. Regardless, I still think it's all interesting and valuable work.
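The distinction can be sketched in Python (the `checked` decorator is hypothetical): an optionally typed language simply erases the annotations, while a gradually typed one inserts a run-time check like this at the typed/untyped boundary:

```python
def checked(f):
    # a gradual-typing-style wrapper: annotations become run-time checks
    names = f.__code__.co_varnames[:f.__code__.co_argcount]
    def wrapper(*args):
        for name, value in zip(names, args):
            expected = f.__annotations__.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}")
        return f(*args)
    return wrapper

@checked
def double(x: int) -> int:
    return x + x

double(2)        # 4, passes the inserted check
# double("hi") raises TypeError instead of silently returning "hihi"
```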
I failed to find F# computation expressions, probably because I searched for the word "monad" which they seem to be avoiding, but that's another great example.
I think the jump from 'Category' to 'Value: Anything that can be assigned to a variable.' was a somewhat jarring change in abstraction level.
Also, trying to explain Monad, Comonad, or Applicative as 'jargon' is probably a step too far IMO. They are not important for getting started with FP, and describing them without their equational properties is kind of meaningless.
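For example, the equational properties for the list monad can be stated (and checked) in a few lines of Python, taking bind(m, f) = flatten(map(f, m)) and unit(x) = [x]:

```python
def bind(m, f):
    # flatten(map(f, m)) for the list monad
    return [y for x in m for y in f(x)]

def unit(x):
    return [x]

f = lambda a: [a, a + 1]
g = lambda a: [a * 2]
m = [1, 2, 3]

assert bind(unit(1), f) == f(1)                                  # left identity
assert bind(m, unit) == m                                        # right identity
assert bind(bind(m, f), g) == bind(m, lambda a: bind(f(a), g))   # associativity
```

Without laws like these, "Monad" is just a word attached to an interface; with them, it actually constrains what an implementation may do.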
That being said, I liked a lot of the inclusions: partial application, closure, composition. I think the collection is probably slightly guilty of trying to be too clever by including advanced concepts.
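Those three combine nicely in one small Python sketch (helper names are my own):

```python
from functools import partial, reduce

def compose(*fs):
    # composition: compose(f, g)(x) == f(g(x))
    return lambda x: reduce(lambda acc, fn: fn(acc), reversed(fs), x)

def add(a, b):
    return a + b

inc = partial(add, 1)        # partial application: fix the first argument

def scale_by(k):
    return lambda x: k * x   # closure: the inner lambda captures k

double = scale_by(2)
pipeline = compose(double, inc)
pipeline(3)  # double(inc(3)) == 8
```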
These terms provide a concise way to communicate more complex concepts in a technical, specialized context: practically the definition of "jargon."
But that's also the reason I agree with you as regards their utility in most programming. Simply knowing the "what" of the definitions is barely a start. Those who know the definitions and are comfortable enough with the material to use them effectively would find little value in lists of this sort. For most (almost all) programming efforts outside a very narrow niche of academic applications it's not even necessary.
I think it makes sense to include some of the advanced jargon in a list of this kind because they're words that people are likely to encounter in online discussions about functional programming.
I'm not really sure why you need the requirement of infinite sequences to do functional programming, or the restriction of no intermediate variables. Let bindings seem like a nice thing to have.
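For what it's worth, a let binding is just sugar for an immediately applied lambda, so intermediate variables don't cost you any purity; a quick Python sketch:

```python
# let x = 2 + 3 in x * x
result = (lambda x: x * x)(2 + 3)   # 25

# ...which is exactly what a named intermediate gives you:
x = 2 + 3
same = x * x                        # 25
```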