
With strong, expressive type systems such as those offered by Haskell, Rust, TypeScript, etc., I find that you front-load all of your debugging to compile/typecheck time. Instead of needing to experiment with your code at runtime through either manual or automated (TDD) tests, you are instead having a conversation with the compiler/typechecker to statically guarantee its correctness. There's just as tight a feedback loop with a typechecker as there is with a test runner; in fact you get your feedback even sooner, at compile time, instead of during the subsequent runtime.

Where static verification against a typechecker shines over runtime verification against a test suite is that a test suite can only demonstrate the presence of bugs; type systems demonstrate their absence (presuming of course that you understand how to encode certain invariants into your types and don't do some foolishness such as simply asserting all values as `any`).
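
For example (a minimal sketch in TypeScript; the names are invented for illustration), you can encode an invariant so that an illegal state simply doesn't typecheck:

    // A request is loading, succeeded with data, or failed with an
    // error -- never two of those at once.
    type RequestState =
      | { kind: "loading" }
      | { kind: "success"; data: string }
      | { kind: "failure"; error: Error };

    function render(state: RequestState): string {
      switch (state.kind) {
        case "loading": return "Loading...";
        case "success": return state.data;          // data exists only here
        case "failure": return state.error.message; // error exists only here
      }
    }

    // render({ kind: "success" }); // rejected at compile time: `data` missing

No test has to run to catch the malformed state; the checker rejects it before the program even exists.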



While I prefer expressive type systems by a long shot, I would be much more careful about it "guaranteeing correctness".

Types can act as good documentation and as a safeguard against stupid mistakes. But the worst bugs come from logic mistakes, wrong assumptions, or unforeseen corner cases. Here, either types do not help, or encoding the invariant in the type system is so difficult that it is not worth the effort, and it makes many future changes harder.

In my previous company we used Scala (with and without Spark) for everything, and that setup pretty much allows both extremes. There was always a middle ground to be found, where types were expressive enough to be useful, but not so expressive that they got in the way.


> While I prefer expressive type systems by a long shot, I would be much more careful about it "guaranteeing correctness".

Yeah, you're not guaranteeing correctness. There's a quote from automated testing discussions that applies here...

> You're not proving the system works. You're proving the system isn't broken in specific ways.

Likewise, a type system guarantees the system is correct only for the specific subset of "ways it can be incorrect" that the type system covers.
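
A minimal sketch of the gap (TypeScript, names invented): both bodies below satisfy the same signature, so the checker accepts the buggy one just as happily as the correct one:

    // Both have type (xs: number[]) => number, so the checker cannot
    // tell the correct implementation from the buggy one.
    function smallest(xs: number[]): number {
      return Math.min(...xs);
    }

    function smallestBuggy(xs: number[]): number {
      return Math.max(...xs); // logic bug; typechecks anyway
    }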


Just encode your business logic in types first! Coq, Idris, or F* will certainly get the job done for you!

/s


Yes, you do, if you want to make money on decades-long timescales instead of some grifter VC two-year thingy.


Unfortunately you end up selling your stuff to people building missiles and bombs that way—witness CompCert and Frama-C.


Not us, but yes, you have a point.


It guarantees the specific kinds of correctness it is having a conversation with you about - and in that way the result is more correct.


Type systems in languages like Haskell or Rust are very very very far from being able to "guarantee correctness". They can only realistically be used to specify extremely basic properties of your program ("doesn't have side effects", "doesn't write memory concurrently", this sort of thing).

For any more interesting properties (say "this function returns a sorted version of the input list", or "this function finds the smallest element in the set", or "this transaction is atomic"), you need something like dependent types, and that comes with a hell of a lot more work.
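
Short of full dependent types, one common halfway measure is a smart constructor plus a branded type: the property is checked once at runtime and then tracked statically. A sketch in TypeScript (the `SortedList` brand is invented for the example):

    // The brand exists only at the type level; at runtime this is a
    // plain number[].
    type SortedList = number[] & { readonly __brand: "sorted" };

    // The only sanctioned way to obtain a SortedList: the constructor
    // establishes the invariant with an actual runtime sort.
    function sorted(xs: number[]): SortedList {
      return [...xs].sort((a, b) => a - b) as SortedList;
    }

    // Downstream code can demand the invariant in its signature.
    function median(xs: SortedList): number {
      return xs[Math.floor(xs.length / 2)];
    }

    median(sorted([3, 1, 2])); // ok
    // median([3, 1, 2]);      // rejected: number[] is not SortedList

Unlike a dependent type, nothing here proves the sort is correct - you've just shrunk the trusted code down to one constructor.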


I would like to see improvements in the speed of feedback - particularly from language servers - but the value of those 'basic' guarantees is more than worth the current cost. Unexpected side effects are responsible for almost every trip I've taken with a debugger in any large Java or C++ project I've ever worked on.


I can remember about 20 years ago a colleague getting quite frustrated that a bug he had been looking at for quite a long time came down to someone doing something bizarre in an overloaded assignment operator in C++.


I've seen methods with names like "get_value()" have extensive side effects.

No type system can fix bad programming.


Of course I think we have all seen horrors like that - what I remember was his completely exasperated response not the technical details of the bug.


Complexity is mostly exponentially worse in the unknowns, and you cannot graph what you do not already know.

The point in the article is that when we read code we need another visualization to change our mental model. I can scan code and find most bugs fast, but when you are stuck, a complexity measure by row/column sure would be handy for finding overloaded assignments.


You're missing the most basic utility they provide... that of making sure other code is calling the function with the right types of arguments. That's a lot of coverage over a language without a compile-time-checked type system.


That's not a utility in itself; whether it's a useful property depends on what the types represent. For example, a C function declared as "void foo(int a)" does ensure that it's called with an int, but if its body then does "100/a", calling it as foo(0) is allowed by the compiler but will fail at runtime. It's true that the equivalent Python function (def foo(a)) can fail at runtime when called as foo(0), but also as foo("ABC"), but it's a matter of degree, not kind.
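
The same degree-not-kind gap exists in statically typed TypeScript (a small illustrative sketch, not from the comment above): the signature rules out wrong argument types but not wrong argument values:

    // The type promises a T, but for an empty array this actually
    // returns undefined -- the checker does not object by default.
    function head<T>(xs: T[]): T {
      return xs[0];
    }

    const n = head([1, 2, 3]);  // fine, n is 1
    const m = head<number>([]); // typechecks, but m is undefined at runtime
    // m.toFixed(2);            // would throw a TypeError

(TypeScript's `noUncheckedIndexedAccess` flag closes this particular hole, which is itself an example of the same matter of degree.)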


Fair.

However, most people are using stuff like JS and Python. For them even the non-dependent type systems are an improvement.


I agree that one should refrain from ever using "guarantee correctness" in the context of type systems outside of Coq & co. But "extremely basic properties" is IMO exaggerating similarly in the other direction.

Take the "basic" property "cannot be null" for example - Considering the issues and costs the lack of that one incurred over the decades, I'd call that one damn interesting.

And Rust? C'mon, its affine type system is its biggest raison d'etre.


I do not consider either TDD or tests to be about finding or solving bugs. They are about regression and refactoring safety. They are my guardrails for when I must change or add things, or need to discover how something works.

The rest of your comment I found to be a really good point in terms of feedback justification. The IDE checking your code before compile or runtime is faster than both. Good point.


Tests for me also help me write better code. When writing tests, I'm forced to switch from "how do I implement the behavior I want" to "how can this fail to do the right thing". Looking at the code from _both_ of those mindsets helps me end up with better code.


Where I see this fall down is when you aren't able to learn from the partial code along the way. The sooner you get an end-to-end setup running, where input to the system causes a change to its output, the better off you are for this sort of feedback. Note: not the soonest you get code producing output, but the soonest you get users giving input and users getting output.

If you are able to internalize everything, you are constantly simulating expectations in your head on what you are coding. Seeing where your expectations fall down on outputs is a valuable thing.

So, yes. If you fully understand everything already, the "paying it upfront" cost of exhaustive types is good. Amazing, even. Until you get there, you are almost certainly pushing off the feedback of learning where you do not fully understand the system.


I feel like you can partly get around this by slowly increasing type specificity over time. With strong type checking, the risk of refactoring is low.
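
A hedged sketch of what that tightening can look like in TypeScript (the shapes are invented for the example):

    // Step 1: land the code with a deliberately loose type.
    function totalLoose(order: any): number {
      return order.items.reduce((sum: number, i: any) => sum + i.price, 0);
    }

    // Step 2: tighten later without changing behavior; the checker
    // now flags every caller that passes the wrong shape.
    interface Item { price: number }
    interface Order { items: Item[] }

    function total(order: Order): number {
      return order.items.reduce((sum, i) => sum + i.price, 0);
    }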


Certainly, but that goes a bit against the idea of incredibly strong types that people often visualize in their mind.

Irony being what it is, most strongly typed programs that I have been introduced to were incredibly tight knots that were not easy to refactor. Many of the restrictions in the types were far stronger than what the program needed, and refactors grew difficult in hard-to-explain ways.

This is all to say, the discourse here is fraught, with nobody acknowledging that "well done" programs of nearly any paradigm/style are well done and work. The evidence often offered is that languages allowing looser styles have far more programs written in them than those that don't. This ignores that the scarcity of programs in the strongly typed world could also reflect many people's inability to deliver using those practices at all. Which, in turn, ignores that this may be a trade-off that is worthwhile in some industries. (I suspect it goes on.)


CL is a strongly and often statically typed language; there are more expressive, even Haskell-like implementations (https://coalton-lang.github.io/20211010-introducing-coalton/) with instant feedback, and it's robust (old; it's underestimated how robust things get when some kid didn't roll your npm dependency yesterday). And yes, an expressive type system like TS's is often Turing complete, so it can hang, but that's not what I am talking about: trivial TS is incredibly slow even with type checking off, and non-trivial TS is a joke. Why so many fans when no one can show even one example that is not slow?


Eventual or gradual typing could leave everyone happy.

On the premise of the article, maybe the key to representing a program visually is a very expressive (and strong) type system. There could be a way to derive some Visual Types from good old regular types, and diagram the visual types at any level of granularity one desires.


Instead, gradual typing seems to always make everybody as unhappy as they can get.

Just like visual programming, it looks like we are doing gradual typing very wrongly.


If TypeScript counts as gradual typing, which I think it does, then many seem to be very happy using it. I have skimmed more than 1000 blog posts on HN, didn't see any about disliking TypeScript, and many people use it.

Doesn't Typescript offer seamless interoperability with vanilla Javascript?
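
(By seamless I mean something like this - a sketch; the file names are invented: TypeScript consumes the plain JavaScript as-is, and a declaration file layers types over it without touching the original source.)

    // legacy.js -- untouched vanilla JavaScript:
    //   function add(a, b) { return a + b; }
    //   module.exports = { add };

    // legacy.d.ts -- types layered over it, no source changes:
    export function add(a: number, b: number): number;

    // app.ts -- typed code calling the untyped module:
    import { add } from "./legacy";
    const three = add(1, 2); // checked
    // add("1", 2);          // now a compile-time error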


I am very much for eventual static typing, and even proofs for some parts; CL is pretty good, and we have a gradual type system in our company for CL. But we found it's just faster and easier to build it first and add types later (our company is almost 40 years old now).


If you use OCaml you get near-instant compile times and types, which is excellent for quick feedback.



