
Your definition of reliability seems different from how most people use the word. I think most would consider a program that was statically checked but often produces a wrong result less reliable than a dynamically checked program that produces the right result.

>My argument is that in the totality of possible errors, statically typed programs have provably LESS errors and thus are definitionally MORE reliable than untyped programs. I am saying that there is ZERO argument here, and that it is mathematical fact. No amount of side stepping out of the bounds of the metric "reliability" will change that.

Making such broad statements about the real world with 100% confidence should already raise some eyebrows. Even through the lens of math and logic, it is unclear how to interpret your argument. Are you claiming that the sum of all possible errors in all runnable programs in a statically checked language is less than the sum of all possible errors in all runnable programs in an equivalent dynamically checked language? Both of those numbers are infinite; I remember from school that some infinities are greater than others, but I'm not sure how you would prove that here. And even if such a statement were true, how would it affect programs written in the real world?

Or is your claim that a randomly picked program from the set of all runnable statically checked programs is expected to have fewer errors than a randomly picked program from the set of all runnable dynamically checked programs? Even that statement doesn't seem trivial, because some correct programs are rejected by the type checker.

If your claim is about real-world programs being written, you also have to consider that their distribution within the set of all runnable programs is not random. Time, attention span and other resources are often limited. Consider the effort spent twisting an already correct program in various ways to satisfy the type checker, and the time lost that could have been invested in further verifying the logic. The result will be much less clear-cut: more probabilistic, more situation-dependent, and so on.


I think the disagreement here comes from overcomplicating what is actually a very simple claim.

I am not reasoning about infinities, cardinalities of infinite sets, or expectations over randomly sampled programs. None of that is needed. You do not need infinities to see that one set is smaller than another. You only need to show that one set contains everything the other does, plus more.

Forget “all possible programs” and forget randomness entirely. We only need to reason about possible runtime outcomes under identical conditions.

Take a language and hold everything constant except static type checking. Same runtime, same semantics, same memory model, same expressiveness. Now ask a very concrete question: what kinds of failures can occur at runtime?

In the dynamically typed variant, there exist programs that execute and then fail with a runtime type error. In the statically typed variant, those same programs are rejected before execution and therefore never produce that runtime failure. Meanwhile, any program that executes successfully in the statically typed variant also executes successfully in the dynamic one. Nothing new can fail in the static case with respect to type errors.
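
To make the containment point concrete, here is a minimal sketch using Python 3.9+ with mypy as the static checker; the names are purely illustrative, and the annotations don't change runtime semantics at all, so the runtime really is held constant:

    # Plain Python plays the role of the dynamically checked variant;
    # Python with mypy-checked annotations plays the statically checked one.

    def total_length(items):                   # unannotated: nothing checked before running
        return sum(len(x) for x in items)

    def total_length_typed(items: list[str]) -> int:
        return sum(len(x) for x in items)

    print(total_length(["ab", "cd"]))          # fine in both variants
    print(total_length_typed(["ab", "cd"]))    # also fine, identical runtime behaviour

    print(total_length(["ab", 3]))             # executes, then dies with a TypeError at runtime
    print(total_length_typed(["ab", 3]))       # mypy rejects this line before the program ever runs

Within this sketch, the annotated function behaves identically to the unannotated one on the well-typed calls, while the one failure shown simply cannot reach runtime once the checker is consulted.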

That is enough. No infinities are involved. No counting is required. If System A allows a category of runtime failure that System B forbids entirely, then the set of possible runtime failure states in B is strictly smaller than in A. This is simple containment logic, not higher math.

The “randomly picked program” framing is a red herring. It turns this into an empirical question about distributions, likelihoods, and developer behavior. But the claim is not about what is likely to happen in practice. It is about what can happen at all, given the language definition. The conclusion follows without measuring anything.

Similarly, arguments about time spent satisfying the type checker or opportunity cost shift the discussion to human workflow. Those may matter for productivity, but they are not properties of the language’s runtime behavior. Once you introduce them, you are no longer evaluating reliability under identical technical conditions.

On the definition of reliability: the specific word is not doing the work here. Once everything except typing is held constant, all other dimensions are equal by assumption. There is literally nothing else left to compare. What remains is exactly one difference: whether a class of runtime failures exists at all. At that point, reliability reduces to failure modes, not by preference or definition games, but because there is no other remaining axis. Everything else is the same except that one variant has runtime type errors and the other doesn't, so which one would you call more "reliable"? The answer is obvious.

So the claim is not that statically typed languages produce correct programs or better engineers. The claim is much narrower and much stronger: holding everything else fixed, static typing removes a class of runtime failures that dynamic typing allows. That statement does not rely on infinities, randomness, or empirical observation. It follows directly from what static typing is.


I recommend doing some experiments before concluding that no register allocation is unbearably slow. I once tried running Gentoo with everything compiled at -O0, and the user experience with most software wasn't significantly different. The amount of performance-critical C code on a modern PC is surprisingly low. Stuff like media decoding is usually done in assembly.


> I recommend doing some experiments before concluding that no register allocation is unbearably slow. I once tried running Gentoo with everything compiled at -O0

AFAIK, register allocation is one of the few optimization passes that are always enabled in all compilers, even with -O0, so your experiment proves nothing.


It's decided by the function use_register_for_decl in GCC: https://github.com/gcc-mirror/gcc/blob/releases/gcc-12/gcc/f... With -g -O0, registers are only used in special cases, such as when the register keyword is present.

The memory accesses are also easily visible by disassembling the compiled binary. The performance of the resulting binary at -O0 is also roughly similar to that of a binary produced by the Tiny C Compiler, which doesn't implement register allocation at all.
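
If anyone wants to reproduce this, here is a rough sketch of the kind of experiment I mean, written as a small Python harness around gcc and objdump (it assumes an x86-64 box with both tools installed, and the counting heuristic is deliberately crude):

    # Compile a tiny C function at -O0 and -O2 and compare the disassembly.
    # At -O0, locals are typically kept on the stack, so the -O0 output should
    # show far more rbp-relative memory operands than the -O2 output.
    import pathlib, subprocess, tempfile

    C_SRC = """
    int accumulate(const int *xs, int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += xs[i];
        return total;
    }
    """

    def disassemble(opt):
        with tempfile.TemporaryDirectory() as tmp:
            src = pathlib.Path(tmp) / "acc.c"
            obj = pathlib.Path(tmp) / "acc.o"
            src.write_text(C_SRC)
            subprocess.run(["gcc", opt, "-c", str(src), "-o", str(obj)], check=True)
            return subprocess.run(["objdump", "-d", str(obj)],
                                  capture_output=True, text=True, check=True).stdout

    for opt in ("-O0", "-O2"):
        count = disassemble(opt).count("(%rbp)")  # crude proxy for stack traffic
        print(opt, count, "rbp-relative memory operands")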


While I love the idea of self-hosting HW and SW, I can't even imagine the pain of building stuff like GCC on a 60 MHz CPU. Not to mention the Rocket CPU is written in Scala. I recently stopped using Gentoo on a RockPro64 because the compile times were unbearable, and that's a system orders of magnitude faster than what they want to use.


You can definitely go considerably faster. A lot of these FOSS cores are either outright unoptimized or target ASICs and so end up performing very badly on FPGAs. A well designed core on a modern FPGA (not one of these bottom of the barrel low power Lattice parts) can definitely hit 250+ MHz with a much more powerful microarch. It's neither cheap nor easy which is why we tend not to see it in the hobby space. That, and better FPGAs tend not to have FOSS toolchains and so it doesn't quite meet the libre spirit.

But, yes, even at 250MHz trying to run Chipyard on a softcore would certainly be an exercise in patience :)


People used 50 MHz SPARC systems to do real work, and the peripherals were all a lot slower (10 Mbps Ethernet, slower SCSI drives), with less and slower RAM. But it might take a week to compile everything you wanted, I agree; of course there is always cross-compiling as well.


That was before everything became a snap package in a docker image.


> That was before everything became a snap package in a docker image.

A modern app should consist of dozens of docker images in k8s on remote cloud infrastructure, all running "serverless" microservices in optimized python*, connected via REST* APIs to a javascript front-end and/or electron "desktop" app, with extensive telemetry and analytics subsystems connected to a prometheus/grafana dashboard.

That is ignoring the ML/LLM components, of course.

If all of this is running reliably, and the network isn't broken again, then you may be able to share notepad pages between your laptop and smartphone.

*possibly golang/protobufs if your name happens to be google and if pytorch and tensorflow haven't been invented yet


Oh I believe in theory a 50 MHz CPU is capable of doing almost everything I need, but it just lacks software optimized for it. I think a week to compile everything is too optimistic.


Old compilers/IDEs like Turbo Pascal or Think C were/are usably fast on single-digit MHz machines and emulators.

And even if the CPU is 50 MHz, modern DRAM and NVMe flash are very fast compared to memory and storage on 1990s (or older) machines.

Older versions of Microsoft Office (etc.) ran about the same on 50 MHz systems as Office 365 runs today.


I did valuable work on a 2 MHz Apple II with a 4 MHz Z80 add-on running CP/M that I used to write the documentation. The documentation part was just as fast forty years ago as it is now, but assembling the code was glacially slow. The 6502 macro assembler running on the Apple took forty minutes to assemble code that filled an 8K EPROM.


6502 assemblers are amazingly fast on more recent hardware. Something like 60-70 ms to run a script to assemble and link a version of msbasic (AppleSoft) on my old laptop.

https://github.com/mist64/msbasic


I usually only notice typos after hn has disabled editing... ;-(


> I can't even imagine the pain of building stuff like GCC on 60Mhz CPU

Some of us remember what that sort of thing was like, not so very long ago...


I remember when I got CodeWarrior on my PowerMac 6100/60 and suddenly I could answer questions online about weird MacApp problems by making a temporary project with their code and compiling the whole of MacApp in 5 minutes.

Previously that had taken about 2 hours (Quadra with MPW), and I did clean builds only when absolutely necessary.

Truly painful was trying to write large programs in Apple (UCSD) Pascal on a 1 MHz 6502.


Back then GCC was much smaller, and only contained C code, not C++. But sure, let's compare apples and ... much bigger heavier apples.


I made a meme, with two guys from the office looking pensive, and sent it to my even older coworker. Titled 'The Build Failed Saturday and Again Sunday Night'


At one time many of us dreamed of having a computer that could run as fast as 60 MHz. The first computers I used ran at around 1 MHz. Compilation will take longer on a slower machine, but that really isn't a big deal. If the computer is reliable and the build scripts are correct, you can just let the process run over days or weeks. I've run many tasks in my life that took days or weeks. Cue "compiling": https://xkcd.com/303/

The real problem is debugging. Debugging the process on a slow system can be unpleasant due to long turn-arounds. Historically the solution is to work in stages & be able to restart at different points (so you don't have to do the whole process each time). That would work here too. In this case, there's an additional option: you can debug the scripts on a much faster though less trustworthy system. Then, once it works, you can run it on the slower system.
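
A sketch of what I mean by restartable stages, with made-up stage names, commands and marker files (any real build would substitute its own steps):

    # Hypothetical sketch: each completed stage drops a marker file, so
    # rerunning the script after a failure skips the work already done.
    import pathlib, subprocess

    STAMPS = pathlib.Path("build-stamps")
    STAMPS.mkdir(exist_ok=True)

    STAGES = [                              # placeholder commands
        ("toolchain", ["make", "toolchain"]),
        ("kernel",    ["make", "kernel"]),
        ("userland",  ["make", "userland"]),
    ]

    for name, cmd in STAGES:
        stamp = STAMPS / (name + ".done")
        if stamp.exists():
            print("skipping", name, "(already built)")
            continue
        print("running", name, "...")
        subprocess.run(cmd, check=True)     # abort on failure; a rerun resumes here
        stamp.touch()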


You might be interested in the Arcan desktop engine ( https://arcan-fe.com ), which has a TUI API for clients. It has been used to build an interesting experimental shell: https://arcan-fe.com/2022/10/15/whipping-up-a-new-shell-lash...

