
> I'm not saying you write tests to protect against NPEs. I'm saying that you write tests to ensure correctness of your code, and as a side-effect NPEs are flushed out of your code. This is my theory explaining why NPEs are not a timesink for me.

You never know when you have enough coverage to rule out NPEs or any other bug. And to get confidence about lack of NPEs you want to have coverage of all lines involving dereferences, which means you need near 100% test coverage to have a reasonable level of confidence. With Haskell, I can be reasonably confident about my code with very few tests.

> but I'd wager that it wouldn't be as readable as the equivalent written as a test.

Generally types are far more concise and guarantee more than tests. I find 5 lines of types more readable than dozens or hundreds of lines of tests.

> That's exactly right. Languages with nulls, and mutable shared state are perfectly reasonable to use if programmers do the right thing by convention

I think Scala users will generally disagree with you. They'd prefer it if null was ruled out in the language itself. That said, Go convention is to use nulls, not shun them.

> I don't find myself writing much safer code when I go from JS to C++. ... Ruby to Scala ...

Your code is much safer simply by construction, so I am not sure what you mean here.

> You claim that a significantly smaller shift, removing nullability from a language would be a big deal for reliability. That doesn't seem likely to me

Hitting type errors at runtime (null dereference crashes in Java, "'NoneType' object has no attribute '...'" in Python) is pretty common IME.
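Concretely, that Python failure mode, and how an Optional-aware signature plus a type checker rules it out, looks roughly like this (the function and data are made up for illustration):

```python
from typing import Optional

def find_user(users: dict[str, str], name: str) -> Optional[str]:
    # dict.get returns None when the key is absent
    return users.get(name)

users = {"alice": "admin"}

# Unchecked dereference: blows up at runtime for a missing user.
try:
    find_user(users, "bob").upper()
except AttributeError as exc:
    err = str(exc)
    print(err)  # 'NoneType' object has no attribute 'upper'

# The Optional-returning signature lets a type checker (e.g. mypy)
# reject the unchecked call above before it ever runs; the checked
# version is accepted:
role = find_user(users, "bob")
safe = role.upper() if role is not None else "unknown"
```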

I do think non-nullability aids reliability, but that having sum types, proper pattern matching and parametric polymorphism aids it even more. And Go lacks all of these.



> You never know when you have enough coverage to rule out NPEs or any other bug. And to get confidence about lack of NPEs you want to have coverage of all lines involving dereferences, which means you need near 100% test coverage to have a reasonable level of confidence.

This is not true. In the example I spoke about above, if I take a CacheClient and a ServiceXClient when my type is being constructed, assign them to instance fields, and then never modify those fields again, then I don't need to exercise every dereference of those fields, just one. And again, I don't test that my code handles NPEs, I test that my code does what it is supposed to, and in the process of doing that, NPEs get flushed out.
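A rough Python rendition of that setup (the client class names come from the example above; everything else is invented):

```python
class CacheClient:
    def get(self, key: str):
        return None  # stand-in: cache miss

class ServiceXClient:
    def fetch(self, key: str) -> str:
        return f"value-for-{key}"  # stand-in: backing service

class Handler:
    # Dependencies are taken at construction time, assigned once,
    # and never reassigned afterwards.
    def __init__(self, cache: CacheClient, service_x: ServiceXClient):
        if cache is None or service_x is None:
            raise ValueError("dependencies must not be None")
        self._cache = cache
        self._service_x = service_x

    def lookup(self, key: str) -> str:
        cached = self._cache.get(key)
        return cached if cached is not None else self._service_x.fetch(key)

# One test that constructs the object and exercises any code path
# covers every dereference of self._cache and self._service_x.
h = Handler(CacheClient(), ServiceXClient())
print(h.lookup("k"))  # value-for-k
```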

> Generally types are far more concise and guarantee more than tests. I find 5 lines of types more readable than dozens or hundreds of lines of tests.

I think you are viewing this through red-black-tree colored glasses. Specifically, you believe that a lot of code has mathematical constraints the way that example did. To me, this is an extremely remote possibility. I think if you tried to encode even the smallest real-world example of this, say a service implementing a URL shortener, you would run into a wall.

> Your code is much safer simply by construction, so I am not sure what you mean here.

I should have said more reliable.

> I do think non-nullability aids reliability, but that having sum types, proper pattern matching and parametric polymorphism aids it even more. And Go lacks all of these.

Truly, it baffles me that people still harp on the reliability aspect. It is quite likely that every piece of software you use day-to-day is written in a language with nullability, without pattern matching, and with no sum types. Most of that software probably isn't even memory-safe (gasp!). Probably every website you visit is in the same sorry state. I'm sorry, but your arguments would be far more convincing if the world written in these languages were a buggy, constantly crashing hell. It's not.


I guess to progress from here we'd need to laboriously compare actual example pieces of code. For example, a URL shortener is going to be easier to write safely in Haskell, where the type system guarantees that the routes I advertise can't 404, and that output escaping rules out XSS attacks.
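In Haskell that guarantee holds at compile time (e.g. routing libraries that derive links from the API type itself). A Python sketch can only convey the underlying single-source-of-truth idea, and only at runtime, but roughly (all names invented):

```python
def shorten(long_url: str) -> str:
    # Hypothetical handler: returns a short code for a long URL.
    return "abc123"

def expand(code: str) -> str:
    # Hypothetical handler: returns the long URL for a short code.
    return "https://example.com/original"

# Advertised routes and their handlers live in one table ...
ROUTES = {"shorten": shorten, "expand": expand}

def link(name: str) -> str:
    # ... and links are only generated from that same table, so an
    # advertised link without a handler cannot exist.
    if name not in ROUTES:
        raise KeyError(f"no handler for route {name!r}")
    return f"/{name}"

print(link("expand"))  # /expand
```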

Also, in my experience, computer software is buggy, unreliable, crashing, and generally terrible. I think people who view software differently have simply grown so accustomed to the terribleness that they can't see it anymore.

Also, reliability is interchangeable with development speed; you can trade one for the other. So if you start from a higher point, you can trade more of it for speed and still be reliable. In an unreliable language, reliability is typically achieved by spending more time maintaining test code, doing QA, etc. In a reliable language, more resources can go to quicker development, and fewer to testing and QA.

When you see a reliable project implemented using unreliable technology, you know it's going to scale poorly and require a lot of testing.




