Half of his clever quotes are completely bonkers, though, and have been disproven by history. How many of you are proving your programs correct before entering them into the computer? Because that is the only correct way to program. And remember, he disparaged Margaret Hamilton's software methodology. Sure, she helped put a man on the moon, but apparently she did it the wrong way.
I suspect geeks like Dijkstra because he is "edgy" more than because he is correct.
Also, Object-Oriented programming was actually invented in Norway, even though Alan Kay of Smalltalk fame tried to take credit.
He was right about GOTO, though, but many developers never understood his argument; they just read the headline and concluded "GOTO bad".
I like him because he is outspoken rather than because he is edgy.
I don’t know how amenable he was to the possibility that his arguments were incorrect, but I love working with/knowing people who are that combination of outspoken and not-overly-stubborn. People like that are a firehose of ideas and knowledge, even if not everything they say is correct. They are also usually “passionate” about their work and at least competent enough to have unorthodox opinions that don’t just sound blatantly stupid.
Most people are too timid or low-ability to be outspoken at all.
The GOTO paper is widely misunderstood, though. It makes a case for blocks, scopes, and functions as structures that make it easier to analyze and reason about the execution of complex programs. The case against unconstrained GOTO follows naturally from this, since you can't have those structures in combination with unconstrained GOTO.
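To make the contrast concrete, here is a sketch in C of the kind of structure the paper argues for (my example, not Dijkstra's):

#include <stdio.h>

/* With unconstrained jumps, the state at any label depends on every
   path that can reach it, so you have to trace them all. */
int sum_goto(const int *a, int n) {
    int i = 0, total = 0;
loop:
    if (i >= n) goto done;
    total += a[i];
    i++;
    goto loop;
done:
    return total;
}

/* The structured version has one entry and one exit, and the loop
   invariant (total holds the sum of a[0..i)) can be stated at a
   single point. */
int sum_for(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}

int main(void) {
    int a[] = {1, 2, 3};
    printf("%d %d\n", sum_goto(a, 3), sum_for(a, 3));
    return 0;
}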
So in short, bitwise operators have lower precedence than comparisons to allow you to write:
if (a==b & c==d) ...
but of course, this means you can't write bitwise checks like this:
if (addr & mask == 0) ...
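In standard C, == binds tighter than &, so the compiler actually reads that as:

if (addr & (mask == 0)) ...

which is almost never what was meant. GCC and Clang both flag this pattern under -Wparentheses.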
The problem could theoretically have been solved when the short-circuit operators (&& and ||) were introduced, by raising the precedence of & and | above the comparisons while leaving the short-circuit operators below them. Then you would be able to write both:
if (a==b && b==c) ...
if (addr & mask == 0) ...
But this was not done, due to concerns about backward compatibility with existing code, since every expression using the old pattern would subtly change semantics. E.g. the first example would now be parsed as:
if ((a == (b & c)) == d) ...
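For the record, here is how the pitfall behaves in C as it actually is (the names and values are mine):

#include <stdio.h>

int main(void) {
    unsigned addr = 0x1000;   /* a page-aligned address */
    unsigned mask = 0xFFF;    /* low 12 bits */

    if (addr & mask == 0)            /* parsed as addr & (mask == 0) */
        printf("never printed\n");   /* mask == 0 is 0, and addr & 0 is 0 */

    if ((addr & mask) == 0)          /* the intended alignment test */
        printf("aligned\n");

    return 0;
}

Spelling out the parentheses costs nothing and resolves the ambiguity for the reader as well as the compiler.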
> you could write [...] but of course, this means you can't write
I found this rather difficult to read. You could write those expressions. They're legal C code. Whether they will have the expected semantics will depend on, well, what you expected.
The more general problem is code that relies too heavily on precedence rules in the first place. Precedence-related bugs and readability issues are easily avoided: just use parentheses. As I mentioned in another comment in this thread, some languages force the programmer to do this.
I said code that relies too heavily on precedence rules. Your example doesn't do so.
In another comment [0] I mentioned that the Ada and Pony languages force the programmer to use parentheses when the expression would otherwise be confusingly reliant on precedence rules. Neither language requires unwieldy overuse of parentheses.
This C programming style advice article similarly recommends a middle-ground approach. [1]
I agree that unnecessary syntactic noise is bad (although this is essentially true by definition, since "noise" is always pejorative). It can harm readability and make bugs more likely.
I always thought using the bitwise operator as if it were a logical operator was simply a mistake, even though it works because false is 0 and true is 1.
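A concrete example of where the substitution bites, since & does not short-circuit (my example, not from the article):

#include <stdio.h>

int main(void) {
    int *p = NULL;

    /* && short-circuits: when p is NULL, *p is never evaluated. */
    if (p != NULL && *p == 5)
        printf("five\n");

    /* & evaluates both operands, so the same test written as
       (p != NULL) & (*p == 5) would dereference a null pointer here. */

    return 0;
}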
Edit: Mea culpa for reading and responding to the comments before the article.
I think it would be adopted exactly as widely as JavaScript is. People choose JavaScript because it is supported by the browser, not because they think it has a beautiful syntax. If some other language had been supported instead (whether VBScript or Scheme or whatever), people would be using that.
Semantic markup languages were niche until the web happened. Objective-C was a weird niche language until the iPhone app boom. People learn the languages they need to learn.
Netscape had market dominance at the time, so Internet Explorer had to keep bug-for-bug compatibility with Netscape to keep up. Given the number of bugs and the lack of documentation for JS, this must have been quite frustrating for MS, which was used to calling the shots.
It’s simple really: write libraries and frameworks yourself and use them for real projects. It will give you a great understanding of what makes a quality library/framework and make you a better programmer.
Are you perchance working in academia? Your suggestion seems to assume unlimited time and resources to write everything yourself, while the purpose of using third-party components is usually to save time and resources.
Think about it as a hobby. Most people have time for hobbies. The rest spend way too much time on social media or watching YouTube. Manage your time better.
Thank you for the advice which is no doubt well-intentioned.
But don't kid yourself. If you use your spare time for research and development that solves problems at work, then it is not really a hobby; it is just you working overtime for free. If that makes you happy, good for you, but it doesn't change the fact that time and resources are limited. The time is just cheaper in your case.
GOTO does not pass a continuation though, so I can't see how it is similar to continuation-passing style. GOTO is just a jump like jumps in machine code.
BASIC does not have a notion of a continuation as a value.
Some BASICs such as BBC BASIC allowed GOTO to take a variable or expression directly, and many BASICs supported the ON..GOTO construct that allows for the same. And turning all control flow within a program into jump-like operations is arguably the main point of CPS.
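For what it's worth, a variable-target GOTO maps naturally onto a dispatch loop in which the jump target is ordinary data, which is the CPS-flavored reading. A C sketch of the correspondence (mine, not from any BASIC implementation):

#include <stdio.h>

/* Rough BASIC original:
     100 ON X GOTO 200, 300
     200 PRINT "two hundred" : GOTO 400
     300 PRINT "three hundred" : GOTO 400
     400 END
*/
int main(void) {
    int x = 2;
    int line = 100;    /* the "program counter" is just a value */

    for (;;) {
        switch (line) {
        case 100: line = (x == 1) ? 200 : 300; break;    /* ON X GOTO */
        case 200: printf("two hundred\n");   line = 400; break;
        case 300: printf("three hundred\n"); line = 400; break;
        case 400: return 0;                               /* END */
        }
    }
}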