Calling HLists “collections” is misleading. In spite of their name, HLists are actually record types. The only actual list involved is a compile-time list of component types used to form a record type.
To give a perhaps odious but relatable analogy, a form is a list of questions, but a filled form is not a list of answers - it is a record of answers to the questions in the form.
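A minimal sketch of the point in Haskell (the HList definition here is illustrative, not any particular library's):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

    import Data.Kind (Type)

    -- The type-level list ts is the compile-time "form": a list of
    -- component types. A value of HList ts is the "filled form": a
    -- record with exactly one component per type, in order.
    data HList (ts :: [Type]) where
      HNil :: HList '[]
      (:>) :: t -> HList ts -> HList (t ': ts)
    infixr 5 :>

    -- A record, not a list: the components have different types.
    person :: HList '[String, Int, Bool]
    person = "Ada" :> 36 :> True :> HNil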
In general, Haskell does not do parametric polymorphism through monomorphization. In particular, higher-rank polymorphism becomes unusable if polymorphism is implemented through monomorphization. On the other hand, if I recall correctly, Rust only supports higher-rank polymorphism for lifetimes, which neither have nor need a runtime representation. This is why monomorphization is a viable implementation strategy for Rust generics.
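A stock Haskell illustration of why (names are mine, for illustration): the rank-2 argument below is used at two different types in the same body, so there is no single instantiation to monomorphize against.

    {-# LANGUAGE RankNTypes #-}

    -- f must itself be polymorphic: it is applied at type Int and at
    -- type Bool in the same body, so there is no single monomorphic
    -- instantiation of applyBoth's argument to compile against.
    applyBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
    applyBoth f (x, y) = (f x, f y)

    example :: (Int, Bool)
    example = applyBoth id (1, True)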
(Aside: Type checkers essentially see recursive function definitions as applications of a fixed point operator to non-recursive functions. If your function has a rank-1 type but uses polymorphic recursion, the type checker sees it as the application of a rank-2 fixed point operator to a non-recursive function. This is why I see polymorphic recursion as “morally higher-rank polymorphism”, even when the type signatures in your code are ostensibly rank-1 ones. Polymorphic recursion is widely used in Haskell.)
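The textbook illustration is a nested datatype (this sketch is the standard example, not tied to anything above):

    -- A nested (non-uniform) datatype: the element type changes at
    -- each level of the structure.
    data Nested a = Nil | Cons a (Nested (a, a))

    -- The recursive call is at type Nested (a, a), not Nested a, so
    -- the signature cannot be inferred and must be written. Seen as
    -- a fixed point, the recursive argument needs the polymorphic
    -- type forall a. Nested a -> Int, i.e. a rank-2 fix.
    size :: Nested a -> Int
    size Nil         = 0
    size (Cons _ ns) = 1 + 2 * size ns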
However, IMO, you only need rank-1 polymorphism 95% of the time anyway, so optimizing for the common use case is a good strategy. By far, the main use case for generics is implementing efficient and reasonably reusable data structures and algorithms in a reasonably type-safe way. For this use case, monomorphization and aggressive inlining of small functions are evidently the right things to do. Other uses of generics (say, streaming I/O frameworks) strike me as a lot more questionable.
Two words: loop invariant. Implement a system that figures out the right loop invariant given a problem description (expressed however you want), and you will have made a lot of progress.
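For concreteness, this is the kind of invariant I mean, using integer square root by bisection as a toy problem (example mine):

    -- Integer square root by bisection. The invariant that makes the
    -- loop correct: lo*lo <= n < hi*hi. On exit (hi == lo + 1), the
    -- invariant alone forces lo to be the integer square root of n.
    isqrt :: Integer -> Integer
    isqrt n = go 0 (n + 1)
      where
        -- Invariant: lo*lo <= n < hi*hi
        go lo hi
          | lo + 1 >= hi = lo
          | m * m <= n   = go m hi   -- preserves lo*lo <= n
          | otherwise    = go lo m   -- preserves n < hi*hi
          where m = (lo + hi) `div` 2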
Optionals are a better alternative to null. They compose better (i.e., they nest) and play nicely with data abstraction (i.e., you can define an abstract type that hides the fact that its underlying representation is optional), unlike null.
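The nesting point, concretely, in Haskell (Data.Map.lookup already returns a Maybe, so looking up an optional-valued map nests; lookupSetting is a made-up name):

    import Data.Map (Map)
    import qualified Data.Map as Map

    -- With null, "no entry" and "entry explicitly set to null"
    -- collapse into the same value. With optionals the two cases
    -- nest and stay distinct:
    --   Nothing        -> key absent
    --   Just Nothing   -> key present, explicitly unset
    --   Just (Just n)  -> key present with value n
    lookupSetting :: String -> Map String (Maybe Int) -> Maybe (Maybe Int)
    lookupSetting = Map.lookup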
Typed Racket is more ambitious than other attempts at adding types to an underlying untyped language. Namely, Typed Racket guarantees that typed code is never to blame for certain contract violations, and, if any such contract violation happens, it will be properly traced back to an offending piece of untyped code. This is what makes gradual types gradual (as opposed to merely optional); alas, it is also what has been found to have unacceptable overhead.
> It doesn't try to analyze and compare existing programming languages.
It does. For example, this theory identifies when and how incorrectly designed programming languages fail to enforce abstractions, very much like how the theory of database normalization identifies when and how incorrectly designed database schemata fail to enforce data integrity constraints.
But perhaps what you mean is “it doesn't try to view existing programming languages in an unwarrantedly positive light”.
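To make “enforce abstractions” concrete, here is the mechanism in miniature, as a hypothetical Haskell module; an incorrectly designed language is one that gives clients some backdoor (reflection, unchecked casts) around this export list:

    module Counter (Counter, new, tick, value) where

    -- The MkCounter constructor is deliberately not exported, so no
    -- client can forge or inspect the representation; the invariant
    -- "the count is non-negative" is enforced by the language itself.
    newtype Counter = MkCounter Int

    new :: Counter
    new = MkCounter 0

    tick :: Counter -> Counter
    tick (MkCounter n) = MkCounter (n + 1)

    value :: Counter -> Int
    value (MkCounter n) = n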
Programming language theory has no concept of “correctly designed” or “incorrectly designed”. It may establish that some languages have certain properties, and researchers may express the opinion that those properties are inadvisable for programmers. Programming language theory does not define a metric for what's good or bad, and, in fact, programmers are often (though not always) better judges of that than theorists.
I certainly didn't mean to imply that the programming languages ignored by these researchers are any good; quite the contrary. It's just that I find the way these researchers sell themselves questionable. They don't have a theory of programming languages; they have a theory of some variant of the lambda calculus. It would be more honest to call it just that.
The lambda calculus is a very simple set of rules which is Turing complete - whichever kind of programming language you prefer can be broken down to lambda calculus.
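For instance, even booleans and conditionals reduce to plain functions via Church encoding (sketched here in Haskell syntax for readability):

    -- A Church boolean is a two-way choice function; "if" is then
    -- nothing but function application.
    true, false :: a -> a -> a
    true  t _ = t
    false _ f = f

    ifThenElse :: (a -> a -> a) -> a -> a -> a
    ifThenElse b = b

    -- ifThenElse true "yes" "no"  ==  "yes"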
Let me clarify - some version of typed lambda calculus. The document linked here doesn't seem to deal with the untyped lambda calculus, much less with properties that are stable under a change of representation of computable functions.
I don't understand in what sense programs can be called “differentiable”. Is the space of programs modulo observational equivalence a manifold to begin with? (I don't think it's Hausdorff or even T1, but I could be wrong.)
The examples given in the article are merely derivatives of ordinary mathematical functions defined by ordinary mathematical expressions - in particular, there is no sequencing, and there are no conditionals or loops. So why call them “differentiable programs” when you are actually dealing with ordinary differentiable functions from good old 19th-century analysis? We need urgent improvements in the intellectual honesty department.
You are badly conflating some issues here. How to implement automatic memory management is a runtime design issue. How to enforce proper non-memory resource management is a language design issue. Nothing forbids an implementation of a safe-Rust-like language with a garbage collector. Destructors would still run deterministically, and destroyed objects would still be unusable afterwards, as mandated by the language's semantics; memory would simply be reclaimed later, during the next garbage collection cycle.
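Haskell is a partial existence proof of this: memory is garbage-collected, yet resource release is deterministic when done with bracket (though Haskell, unlike a safe-Rust-like language, won't statically reject use-after-release; firstLine is a made-up example):

    import Control.Exception (bracket)
    import System.IO (IOMode (ReadMode), hClose, hGetLine, openFile)

    -- The handle is closed deterministically, exactly when bracket
    -- exits (even on exceptions); the memory backing it is reclaimed
    -- later, whenever the garbage collector runs.
    firstLine :: FilePath -> IO String
    firstLine path = bracket (openFile path ReadMode) hClose hGetLine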
There are other (better!) reasons against cross-language interoperability, though, such as the reduction in static guarantees to an unusable lowest common denominator.