The article’s position comes down to “no fundamentally new way to program would make sense for today’s programmers to switch to,” and it cites examples like the platforms of the no-code movement.
From previous generational leaps, we’ve learned that the users post-leap don’t look like the pre-leap users at all. The iPod’s introduction brought about a generation of new digital music users that didn’t look like the Limewire generation, and the iPhone’s average user didn’t look like the average user of the BlackBerry before it.
Modern programming is at the core of HN, and of most of SV, sure. That said, we should still be the first to realize that a successful, fundamentally new way to program would target a new generation and idea of software maker, one that won’t look like the modern developer at all.
Exactly. A paradigm shift implies new mental models and new metaphors for our abstractions that might not be valuable to people who think our current abstractions serve us well.
A great example of this is the fact that we still use the metaphor of files and folders for organizing our source code. The Unison language works directly with an AST that is modified from a scratch file[0]. For people committed to new models of distributed computing, that makes sense; for everyone else, it might be seen as an idea that messes with their current tooling and changes existing and familiar workflows.
I think the really big leaps forward are going to go well beyond this, and they will look like sacrilege to the old guard. New programmers don't care whether a programming language is Turing complete or whether the type system has certain properties; they only care about working software, while existing programmers are dogmatic about these concepts. I think the next leap forward in programming is going to offend the sensibilities of current programmers. Having to break with orthodoxy to get a job done won't worry people who don't know much about programming tradition to begin with.
Perhaps. Or it'll be like civil engineering or something, where the fundamental principles really stay similar even as technology/theory dramatically improves.
“I think the next leap forward in programming is going to offend the sensibilities of current programmers.”
Honestly, programmers have been railing against progress ever since the first machine coders shook their canes at those ghastly upstart programming languages now tearing up their lawns.
Meanwhile, what often does pass for “progress” amounts to anything but:
> we still use the metaphor of files and folders for organizing our source code
We don't use the metaphor for storing things. What we use is a hierarchical naming scheme. This makes sense for a number of use cases, and has been independently discovered multiple times throughout the short history of computing.
You may call the nodes files and folders. That is, however, just a word, a metaphor for the underlying data structure, which is the physical reality. You could just as easily call it something else, and many people whose first language is different from yours probably do.
Wow, thanks for sharing Unison, seems super interesting! I've been thinking about content addressed code compilation lately that could allow one to have all versions of a program within a single binary. Apparently there are other benefits to it. Can't wait to learn what they have discovered!
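If it helps make that concrete, here is a tiny, hypothetical Python sketch of content addressing: a definition is identified by a hash of its parsed AST rather than by a file path, so several versions of "the same" function can sit side by side in one store. (Unison goes much further, canonicalizing names and hashing whole dependency graphs; everything below is purely illustrative.)

    import ast, hashlib

    def content_hash(source: str) -> str:
        # Hash the AST dump rather than the raw text, so formatting and
        # comments don't affect a definition's identity.
        tree = ast.parse(source)
        return hashlib.sha256(ast.dump(tree).encode()).hexdigest()[:12]

    store = {}
    v1 = "def area(r):\n    return 3.14 * r * r\n"
    v2 = "def area(r):\n    return 3.14159 * r * r\n"
    store[content_hash(v1)] = v1
    store[content_hash(v2)] = v2   # both versions coexist, addressed by hash
    print(list(store))             # two distinct content hashes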
I've played a little bit with Unison and it's definitely very interesting and a little bit of a new paradigm (some people compare their "images" to Smalltalk images, but I think they differ enough to be considered as distinct paradigms)... but they're still working on very basic things, like how to enable people to do code reviews when the committed code is an AST, not just text... and how to actually distribute such software (I asked in their very friendly chat but apparently you can't run the code outside the interpreter for now, which I think is written in Haskell)... also, the only help you can get writing code is some syntax highlighting, even though the ucm CLI can display function docs and look up functions by type, for example, similar to Haskell's Hoogle (but in the CLI!!). ... so just be aware it's very early days for Unison (and they do make that clear by "forcing" you to join the #alphatesting Slack chat to install it, which is a great idea IMO as it sets expectations early).
Re: "A great example of this is the fact that we still use the metaphor of files and folders for organizing our source code."
I agree 100%! Trees are too limiting. I'm not sure we need entirely new languages to move away from files; we just need more experiments to see what works and what doesn't, and to add those features to existing languages & IDE's if possible. I don't like the idea of throwing EVERYTHING out unless they can't be reworked. (Files may still be an intermediate compile step, just not something developers have to normally be concerned with.)
I believe IDE's could integrate with existing RDBMS or something like Dynamic Relational, which tries to stick to most RDBMS norms rather than throw it all out like NoSql tried, in order to leverage existing knowledge.
Your view of source code would then be controlled by querying (canned and custom): bring all of aspect A together, all of aspect B together, etc. YOU control the (virtual) grouping, not Bill Gates, Bezos, nor your shop's architect.
Most CRUD applications are event driven, and how the events are grouped for editing or team allocation should be dynamically determined and not hard-wired into the file system. Typical event search, grouping, and filter factors include but are not limited to:
* Area (section, such as reference tables vs. data)
* Entity or screen group
* Action type: "list", "search", "edit", etc.
* Stage: Query, first pass (form), failed validation, render, save, etc.
And "tags" could be used to mark domain-specific concerns. Modern CRUD is becoming a giant soup of event handlers, and we need powerful RDBMS-like features to manage this soup using multiple attributes, both those built into the stack and application-specific attributes/tags.
Then why hasn't this happened over the past 40 years? That's more than one generation of programmers, over a whole lot of change from mainframes to PCs, the web, mobile devices, and cloud services, with thousands of programming languages and tools being invented over that time, but mostly it's been incremental progress. PLs today aren't radically different from what they were in the 60s. It's something visionaries like Alan Kay have repeatedly complained about.
New paradigms emerge when people think differently about the problems to solve or try to solve new problems. It makes sense to me that it might take more than one generation of people working on a similar set of problems before we have significantly different solutions.
> A great example of this is the fact that we still use the metaphor of files and folders for organizing our source code.
I think there's something akin to a category error here.
First, let's agree that we do want to organize our source code to some degree. There are chunks of source code (at whatever scale you prefer: libraries, objects, concepts, etc.) that are related to each other more than they are related to other chunks. The implementation of "an object", for example, consists of a set of chunks that are more closely related to each other than they are to any chunk from the implementation of a different object.
So we have some notion of conceptual proximity for source code.
Now combine that with just one thing: scrolling. Sure, sometimes when I'm working on code I want to just jump to the definition of something, and when I want to do that, I really don't care what the underlying organization of the bytes that make up the source code is.
But scrolling is important too. Remove the ability to scroll through a group of conceptually proximal code chunks and I think you seriously damage the ability of a programmer to interact in fundamentally useful ways with the code.
So, we want the bytes that represent a group of conceptually proximal code chunks to be scrollable, at least as one option in a set of options about how we might navigate the source code.
Certainly, one could take an AST and "render" some part of it as a scrollable display.
But what's another name for "scrollable bytes"? Yes, you've guessed it: we call it a file.
Now, rendering some subset of the AST would make sense if there were many different ways of putting together a "scroll" (semantically, not implementation). But I would suggest that actually, there are not. I'd be delighted to hear that I'm wrong.
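As a small illustration of what rendering a subset of the AST as a "scroll" might look like, here's a sketch using Python's own ast module (3.9+ for ast.unparse); the grouping criterion is arbitrary and purely for demonstration.

    import ast

    def render_view(source: str, name_contains: str) -> str:
        # Pick the top-level definitions whose names match the query and
        # unparse them back into one scrollable text view.
        tree = ast.parse(source)
        picked = [node for node in tree.body
                  if isinstance(node, (ast.FunctionDef, ast.ClassDef))
                  and name_contains in node.name]
        return "\n\n".join(ast.unparse(node) for node in picked)

    code = "def parse_invoice(x): ...\ndef render_invoice(x): ...\ndef parse_customer(x): ...\n"
    print(render_view(code, "invoice"))   # one "file" grouping the invoice code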
I think there's a solid case for programming tools making it completely trivial to jump around from point to point in the codebase, driven by multiple different questions. Doing that well would tend to decouple the programmer's view of the source as "a bunch of files" from whatever the underlying reality is.
But ... I haven't even mentioned build systems yet. Given how computers actually work, the end result of a build is ... a set of files. Any build system's core function is to take some input and generate a set of files (possibly just one, possibly many more). There's no requirement that the input also be a set of files, but for many reasons, it is hellishly convenient that the basic metaphor of "file in / file out" used by so many steps in a build process tends to lead to the inputs to the build process also being files.
I wonder if there's a way to deal with these concerns with a different visual approach that's better than files. I haven't seen one, but am still curious. JetBrains' IDE code navigation (Ctrl+B to go to definition, etc.) is a step in that direction, but ultimately the scrollable areas are still separated into files, even though you can navigate between them more directly.
I wonder how much of this comes from how tightly "programming" has been defined as/synonymous with "writing code".
I have two family members who brought up that much of their job was "custom formulas in Excel". They would not call themselves programmers, but they'd learned some basic programming for their job.
I wonder how much "Microsoft Flow Implementer" will become its own job focus with more and more people getting access to Teams.
Former Limewire developer here. I definitely had an iPod mini prior to Limewire's peak years, as measured by the number of monthly peers reachable via crawling the Gnutella network.
That’s very interesting to note. I imagine that the popularity of the iPod led a lot of new people to Limewire before the iTunes Store and Spotify took off, pushing its true peak to be a lot later than most (including myself) might recall.
The “Don’t steal music” label on every new iPod might as well have been a Limewire ad.
We had an internal URL for a graph of daily sales. Sales definitely went up every time the major record labels put out a press release that they were planning to sue because of the amount of music available.
> What's your take on how everything works these days?
Mostly, I wish technologies to make unreliable P2P transfers more robust had been widely applied to point-to-point transfers. I wish my phone, for instance, used low-data-rate UDP (with TCP-friendly flow control and a low priority IPv6 QoS) with a rateless forward error code (such as Network Codes) to download updates. There's no reason an update download should just fail and start over if WiFi is spotty or you move between WiFi networks.
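For what it's worth, here is a toy sketch of the kind of rateless scheme being described: random linear coding over GF(2), where every packet is the XOR of a random subset of the source chunks, and any roughly-k linearly independent packets reconstruct the data regardless of which packets were lost or in what order they arrive. A real implementation would use a proper fountain code (LT/Raptor-style) and sit under TCP-friendly congestion control as described above; this is only meant to show the shape of the idea.

    import random

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(chunks, n_packets, seed=42):
        # Each packet = XOR of a random non-empty subset of the chunks,
        # tagged with that subset as a bitmask.
        k, rng, out = len(chunks), random.Random(seed), []
        for _ in range(n_packets):
            mask = 0
            while mask == 0:
                mask = rng.getrandbits(k)
            payload = bytes(len(chunks[0]))
            for i in range(k):
                if mask >> i & 1:
                    payload = xor(payload, chunks[i])
            out.append((mask, payload))
        return out

    def decode(packets, k):
        # Gaussian elimination over GF(2): any k independent packets recover
        # the chunks; returns None if we don't have enough yet.
        basis = {}                                 # lowest set bit -> (mask, payload)
        for mask, payload in packets:
            while mask:
                pivot = mask & -mask
                if pivot not in basis:
                    basis[pivot] = (mask, payload)
                    break
                bmask, bpayload = basis[pivot]
                mask, payload = mask ^ bmask, xor(payload, bpayload)
        if len(basis) < k:
            return None
        for pivot in sorted(basis, reverse=True):  # back-substitution
            mask, payload = basis[pivot]
            for higher in [b for b in basis if b > pivot and mask & b]:
                hmask, hpayload = basis[higher]    # already reduced to one bit
                mask, payload = mask ^ hmask, xor(payload, hpayload)
            basis[pivot] = (mask, payload)
        return [basis[1 << i][1] for i in range(k)]

    chunks = [b"hello world ", b"rateless FEC", b"toy example."]
    packets = encode(chunks, n_packets=12)   # a few more than strictly needed
    random.shuffle(packets)                  # arrival order doesn't matter
    print(decode(packets, len(chunks)))      # -> the three original chunks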
Power, CO2, and cost efficiencies of scale due to centralization are nice. The shift to mobile makes P2P more challenging, see Skype switching to a more centralized architecture to make mobile conversations more stable.
I wish we had somehow come to a point where users were incentivized to use P2P programs that marked their traffic as P2P using the IPv6 QoS field, rather than relying on heuristics to try to shape traffic. Using heuristics to shape traffic incentivizes P2P traffic to use steganography and mimic VoIP or video chat, making everything less efficient. Monthly data quotas at different QoS levels, after which all the traffic gets a low priority, would incentivize users to use programs that explicitly signal traffic prioritization to the routers.
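At the socket level, explicitly marking traffic is cheap; here is a hedged sketch (Linux-flavored, and the constant's availability varies by platform) of a P2P client voluntarily tagging its own bulk transfers with the Lower Effort DSCP from RFC 8622 in the IPv6 Traffic Class field. Whether routers and ISPs actually honor the mark is, of course, the policy problem described above.

    import socket

    LE_DSCP = 0b000001   # "Lower Effort" per-hop behavior (RFC 8622)

    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    # DSCP occupies the upper six bits of the Traffic Class byte; the low
    # two bits are ECN, so shift the DSCP value left by two.
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, LE_DSCP << 2)
    # ...bulk peer-to-peer transfers on this socket now carry an explicit
    # low-priority mark instead of hoping a middlebox guesses correctly.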
Comcast's traffic shaping attempts using heuristics seem to have caused it to forge RST packets when Lotus Notes (mostly used by enterprises) downloaded large attachments, breaking attachment downloads.[0]
> Do you miss p2p?
Sometimes.
> Do you think we could ever get back to it?
I think that really depends on corporate censorship (with and without government pressure) trends in the near future, and how hard the average person wants to push back. I think P2P is unlikely to see a major resurgence any time soon.
I think a better criterion for "peak years" is peak momentum (user increase over time), vs. peak users.
Most technologies reach peak users when they stop growing, and peak momentum when their growth stops accelerating (and then they either stay somewhat stable, e.g. Microsoft, or, like many, start to decline, e.g. BlackBerry).
Limewire's peak years were immediately before settling a lawsuit. When the lawsuit threats started, the owner panicked, put out a press release saying that he would shut down the company right away, then back-pedaled and announced he'd fight the lawsuit. There was a ton of noise in the press and lots of media speculation about what was going to happen, which generated tons of free advertising.
Part of the settlement was using the auto-update feature to update the vast majority of users to a record-label-developed application that was skinned to look like LimeWire. (I left a year or two before the killing update, but as I remember, they made one release that removed the optionality from the auto-update, waited for the majority of users to update, and then force-updated everyone.)
LimeWire's fall in popularity didn't look anything like a normal decline.
I don't think the media frenzy free advertising was intentional, but the owner was a bit of a mad genius. He has tons of ideas, 80% of which are batshit crazy, and 1% of which are out-of-the-box brilliant. He has a few people close to him who are good at picking out the uncut diamonds. He also founded and runs a very successful hedge fund. On the other hand, he emailed everyone a paranoid email and then talked to journalists about it when it leaked[0].
I agree with the overall point, but I think you're using the wrong example. Music consumption didn't really change with the iPod; it changed with abundant mobile data that removed the need for locally stored files (and hence their management). You can argue that the introduction of iTunes changed the game, which it did a bit, but imho mobile data is what fundamentally altered the field. Imho the move really was CD -> MP3 (filesharing/iTunes) -> streaming to mobile.
When I bought a 40gb music player in 2005, I stopped downloading songs and started downloading discographies of entire artists and labels. The change that came with streaming services wasn't the first one.
I'd download entire music libraries (from Soulseek where you can browse people's shared files). I'd look up something I liked and assume someone who likes it has good taste :) And download whatever else seemed interesting. Thus my iPod became a vehicle for discovering new music.
I think the next paradigm shift in programming is the shift from local to cloud IDEs.
I see a lot of backlash against that idea these days, but it seems inevitable.
I don't think we can predict the full consequences of that, but one I see already is massively lowering friction. If the cloud knows how to run your code anyway, there's no reason why the fork button couldn't immediately spin up a dev environment. No Docker, no hunting for dependencies, just one click, and you have the thing running.
The next generation of programmers (mostly young teenagers at this point) is often using repl.it apparently, and building cool stuff with it. This is definitely promising for this approach, as the old generation will pass away eventually.
I work with some folks who use Brewlytics (https://brewlytics.com/). It's basically a way to use logical modeling to automate tasks, actions, and pull and push data for said automation. It's parallel to programming - these folks are using iterators, splitting and recombining fields, creating reusable parts out of smaller parts. They basically ARE programming, but almost none of them know anything more about programming than Hello World in Python.
We don't call people working in Excel programmers, though; not even they call themselves that. That is the thing: we create a ton of wonderful no/low-code tools, but then we create jobs distinct from programming, because once the programming is no longer the hard part, the role is no longer a programmer.
IMHO programming language design is (or at least should be) guided by the underlying hardware. If the hardware dramatically changes, the way this new hardware is programmed will also need to change radically. But as long as the hardware doesn't radically change (which it hasn't, so far, for the last 70 years or so), programming this hardware won't (and shouldn't) radically change either. It's really quite simple (or naive, your pick) :)
Yes indeed, and after I had hit the reply button I was thinking about GPUs. But if you look at how a single GPU core is programmed, this is still served pretty well by the traditional programming model, just slightly enhanced for the different memory architecture.
With "radical changes" I mean totally moving away from the von Neumann architecture, e.g. "weird stuff" like quantum-, biochemical- or analog-computers.
> That said, we should still be the first to realize that a successful, fundamentally new way to program would target a new generation and idea of software maker, one that won’t look like the modern developer at all.
Channeling Gibson[1], do you see any potential successors already out there?
[1] “The future is already here – it's just not evenly distributed.”
Real leaps can be distinguished from hype by where the passion is coming from. The fact that the movement’s passion is coming from actual, paying users and not just no-code platform makers is key here.
It’s rapidly creating a new generation of software creators that could not create software before, and it’s improving very, very fast.
I notice, though, that your examples are not from programming at all. Your examples are about users of devices. True, programmers use languages, but programming is far more complicated than using a music service.
Something like "no code" may make programming easier... until it doesn't. That is, you get to the point where either you can't do what you need to do, or where it would be easier to do it by just writing the code. If the "no code" approach lets you write significant parts of your program that way, it may still be a net win, but it's not the way we're going to do all of programming in the future.
"I notice, though, that your examples are not from programming at all. Your examples are about users of devices. "
Just to level set - as a program manager when I engage with programmers it's not because I want to buy programmers.
I want the fruits of their labors.
Let me put it another way - programmers love to bemoan the way users abuse Excel. Users abuse Excel because it meets their needs best, given all other factors in their environments.
If things like no-code environments progress to where they can provide, at a minimum, the level of functionality Excel can for many tasks, then it will take off. No, it won't be "all of programming", but enough to be a paradigm shift.
OK, take Excel. It provided a way for a lot of non-programmers, who didn't want to become programmers, to program enough to get their work done. And that's great!
But if you look at a graph of the number of people employed as programmers, and you look for the point where that number started to decline because Excel made them unnecessary, well, you don't find it. Excel made simpler stuff available for simpler problems, but it didn't address bigger problems, and there were plenty of bigger problems to go around.
And when we talk about fundamentally improving programming, we aren't talking about improving it for those trying to solve Excel-level problems. (That's still worth doing! It's just not what we're talking about.)
So if you can create a new Excel for some area, it will take off. And that's great, for the people who can use it. It won't be all of programming, but it will be a paradigm shift for those who use it.
Will that be a paradigm shift for all of programming? Depends on how many people use it. My bet would be that there is no Excel-like shift (or no-code shift) in, say, the next 20 years, that will affect even 30% of what we currently recognize as programmers.
(If you introduce a great no-code thing, and 10% of current programmers shift to use it, and a ton of newcomers join them, that still only counts as 10% by my metric, in the same way that we don't really count Excel jockeys as professional programmers.)
Generational leaps emerge in the same ways everywhere.
For any space, if you provide a large enough net win for a large enough number of people, you introduce a generational leap. Very often, those people are completely new to the space.
The measure here isn’t how many growing companies that started with no-code adopt code as they grow. The measure here is how many growing companies that started with no-code wouldn’t have been started otherwise.
This claim is resting upon flimsy metaphors only. Of course, tautologically, each new wave of tech has some differences in demographics, and old people are slow to learn new paradigms, and generations differ, BUT ultimately we have no idea if/when there will be a new wave and how different its users will be. It might very well happen that the demographics don’t change as much, as the programming profession is already one of the most fragmented and eclectic, and attracts people whose primary virtue is manipulating logical abstractions.
I think you are taking the weakest possible extrapolation of the article's position and attacking that.
This article is about changing techs for an existing product. And the author is correct; tech changes are very costly for existing products. You have to weigh the cost of the rewrite. Swapping out your markdown parsing library is probably relatively low-cost. Swapping out your web framework is potentially years of work for no practical gain.
Most of us aren't working on new things. Day 2 of a company's existence, you already have legacy code and have to deal with things that were built before.
Removing barriers to entry comes with its own problems. Today we see that the horrible, error-prone Excel sheets created by non-programmers weren't a great idea at all. Similarly, many web developers don't understand performance, and we end up with bloated sites / Electron apps.
I think a lot of progress will be incremental. Seemingly "revolutionary" ideas like Light Table break on even modestly real-world stuff. Functional programming is elegant and all, until you hit a part of the problem that is fundamentally imperative, or a performance problem. I think programming progress will be incremental, just as the industry continues to mature.
>Functional programming is elegant and all, until you hit a part of the problem that is fundamentally imperative, or a performance problem.
Most functional languages allow you to do imperative stuff, so this is not an issue. They just usually provide an environment where the defaults guide you toward the functional style (immutable by default, option/result types instead of exceptions, making partial application of functions and piping easy, etc.).
A prime example would be F#. You can program pretty much the same as in C# if you need to, but there are a lot of facilities for programming in a more functional style.
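Not F#, but a rough Python sketch of the flavor those defaults give you (explicit result values instead of exceptions, partial application, and piping small functions together); all names are made up for illustration.

    from functools import partial, reduce

    def pipe(value, *funcs):
        # value |> f |> g ... expressed as an ordinary function call.
        return reduce(lambda acc, f: f(acc), funcs, value)

    def safe_div(divisor, x):
        # Return ("ok", value) or ("error", message) rather than raising.
        return ("error", "division by zero") if divisor == 0 else ("ok", x / divisor)

    halve = partial(safe_div, 2)               # partial application
    print(pipe(10, halve))                     # ('ok', 5.0)
    print(pipe(10, partial(safe_div, 0)))      # ('error', 'division by zero')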
While the author speaks about difficulties due to momentum, habit, and baggage, I think the fundamental problem is much, much harder.
Many have described the problem in different ways, from Fred Brooks's essential vs. accidental complexity to results on the intrinsic "hardness" of programming we've learned from computational complexity theory in the past two decades, but my favourite is the succinct description Gilles Dowek gave at a talk at NASA [1]:
> Computers have been invented to surprise us... If we knew what computers do, we would not use them, and we would not have built any.
In other words, not only is writing the programs we want hard because of some "reasonable" Brooks-like argument or some rigorous computational-complexity argument, but we write programs in the first place because it is hard.
In practice, not only has Brooks's prediction of no 10x improvement in programming productivity due to any single development -- which was rejected by many at the time as too pessimistic -- been vindicated, but the reality is worse: we have not seen a 10x boost due to all developments combined, and in over 30 years! And the boosts we have seen are mostly due to the availability of open-source software, and widespread knowledge sharing on the internet.
> Computers have been invented to surprise us... If we knew what computers do, we would not use them, and we would not have built any.
This is just trying to sound clever for the sake of sounding clever. The first bit is factually wrong in any non-trivial way, and the counter argument for the second bit is the very next slide.
As a formal methods researcher, he expresses in a clever way what formal method researchers know: it is extremely hard to establish interesting non-trivial properties of programs. His cleverness is pointing out that this is intentional: if we could easily tell all interesting properties of the results our programs give us, we wouldn't need them in the first place (of course this isn't always true, but don't be so literal). It's another way of saying that software exists for its essential complexity, and that that complexity -- as formal methods/software analysis research tells us -- is high.
> if we could easily tell all interesting properties of the results our programs give us, we wouldn't need them in the first place
At least to a degree, that may be our own hubris instead of an essential quality of programming. Rarely do I encounter a program whose problems are in their essential complexity. What I see instead is people convincing themselves that all of their pain is necessary.
We have a lot of architectural astronauts who seek complexity for its own sake. We have feature factories adding new complexity all the time. Code being written to justify code still being written - programming bureaucracies. Like moths to a flame we reach for complexity. And we reach, and we reach, and we reach.
I have spent a lot of my career working to scale developers vertically. In software, when communication is the bottleneck, we either fix it and keep scaling horizontally, or we can't and work to scale our hardware vertically. Achievements in developer communications have been rare, and yet we keep trying to scale horizontally like we don't already know how this story ends. Dusty, old 25th-anniversary editions of Brooks lie unheeded on the shelf.
Boringly predictable code is how I do that. Tools that automate very repetitive but error-prone processes are part of that mix. In which case I definitely know the answer; I just really want to make sure I get it. This is, after all, how software got started in the first place: the logical conclusion of a story started by Monsieur Jacquard.
Some people get really uncomfortable in the face of such changes, but they are typically folks I have already identified as part of the complexity problem. Some can be converted, others cannot. We are poisoning the well and standing around complaining about it.
> Rarely do I encounter a program whose problems are in their essential complexity.
I'm not saying we don't also introduce non-essential complexity, but I think that working on distributed/interactive/concurrent systems and specifying them in TLA+, which allows us to express just the essential complexity, will disabuse you of that notion. Because TLA+ allows you to either write a proof of an assertion or to automatically look for counterexamples with a model checker, it makes it easy to compare the relative difficulty of those two activities. Not only is proving much harder, in most cases you'll find that "intuitive" reasoning is just wrong.
But assuming "if we knew what computers do, we would not use them" is true, then the rest of his talk is pointless.
We use computers precisely because we know what they do. We're worried about black boxes and long for formal proofs of our programs because they are useful only when we know what they do.
If I hand you a black box with a button and two LEDs that blink in response to input, it would be initially useless to you precisely because you do not know what it does. Only if you learn what it does, by exploration or me telling you, can it become useful to you.
I think that an overly literal and precise reading of his pithy phrase misses the point.
Computers do exactly as we tell them to do, and we might also know what we'd like them to do but we usually don't know what we'd like about the result of what they do. Since the birth of computer science we've known that computers are mysterious in the sense that even though their operation is deterministic in a very natural sense, its outcome is not generally knowable; so deterministic but indeterminable. Dowek merely points out that this is not just a problem with computers but also the very point of them. If their operation's outcome were easily determinable, we wouldn't need it.
> I think that an overly literal and precise reading of his pithy phrase misses the point.
I suppose you're right. I get what you're saying, and based on that it seems what Dowek should have said was that "if we knew what computers would do, we would not use them".
For me that's a pretty crucial distinction though.
In any case, I agree that computation being deterministic yet indeterminable is indeed fairly surprising[1] and quite interesting, and indeed it is this potential for complexity that makes computers useful.
Surprise in this context means you didn't know the exact data the computer will output, not that you don't know how a computer or the program works.
Like if you ask a calculator to calculate sin(13), you know exactly how it calculates it, but you didn't know the exact number in advance, so the result was "surprising".
I didn't mean it as such. I just found it wrong, and thus not very interesting, for the reasons I stated.
> Surprise is a measure of new Information (ref: Shannon).
Ah. Not read Shannon himself, so wasn't aware he used his own special definition of surprise. All the work I've seen has talked about information or entropy.
> To be unsurprised is to provide the answer before the computer produces it.
I'm still not buying that definition of surprise...
... If the answer to the square root of 91287346540 is unsurprising, you should just be able to say precisely what it is without working it out.
I think the issue is the definition of surprise is the issue here. It may be unsurprising to receive a birthday gift, but the exact gift is still a surprise.
We still would have used computers for data entry, storage, and transmission even though we know humans could do that, because they're simply superior to paper and typewriters in most ways. There's nothing surprising about that.
> And the boosts we have seen are mostly due to the availability of open-source software, and widespread knowledge sharing on the internet.
Except the source of most productivity improvements is the dissemination of knowledge of improved techniques. A good analogy (https://johnhcochrane.blogspot.com/2019/05/free-solo-and-eco...) is with rock climbing. The rock climbers today are not much fitter or physically superior to climbers 100 years ago. At best there are minor improvements in shoes, but that's it. What allows someone to free solo El Capitan in 4 hours was standing on the shoulders of giants who had discovered and disseminated climbing techniques in the last century.
This effect is even more pronounced with software, because of how easy it is to reuse others' work. I don't need to learn from the collective experience of Linus Torvalds and other kernel developers, I just use Linux. I don't need to understand the intricacies of text rendering (https://gankra.github.io/blah/text-hates-you/) or compiling or the universe of hardware. I can still add economic value with my software while standing on the shoulders of these giants. I can gain knowledge from all the great documentation out there, as well as helpful Q&A websites. Imagine how many projects would have been abandoned and economic value lost without knowledge being disseminated on StackOverflow.
That is the productivity gain of the last few decades. If someone can't see that, or wants to condescend about the lowering of standards, they're missing the point.
That may well be (and there are techniques believed to have helped significantly, like automated unit-tests and automatic memory management), but the outcome is still worse than Brooks's prediction, which many rejected for being too pessimistic.
Today I can write an app that supports every language, not just English + (optionally) Latin scripts. I can support every operating system without worrying about platform quirks like fonts or rendering. I can deploy it to be available to every person on the planet within minutes, because I don't have to worry about managing my own hardware. I can harden my app in advance against security problems thanks to the knowledge that's out there. I have access to memory-safe, highly efficient languages that didn't exist or weren't mainstream even 15 years ago.
If you asked a competent programmer to do the same thing 30 years ago, they would take at least 10 times more effort than I put in, if they finish at all. Likely they'd give up on Linux/MacOS support, give up on Chinese, Japanese and Korean support, code it in 90s PHP and hope they didn't have any security problems.
All of these are actual productivity improvements. Don't just measure time taken to complete a project. Measure actual work done and features shipped. We're doing more now, because we can.
I'm not sure I agree with that. I suspect that we have enough libraries available today that I can write 1/10 of the code. That's a 10x speedup for my programming.
Seems like a calculator is a good example of a program which we would write even though it is entirely unsurprising; even if we needed nothing else on a computer we might invent it still?
It seems to me that we create computer programs not to surprise us but typically to do things very quickly or conveniently. Even a complex beast like Photoshop is reasonably unsurprising.
The point is that the outputs of the calculator are fundamentally surprising, in the sense that if any individual programmer could easily know all the expected outputs in advance, they wouldn't need the calculator in the first place. Photoshop surprises us by calculating the precise location of millions of pixels based on relatively simple inputs like a handful of clicks on a screen. This logic does not apply to every single imaginable program, but it does apply to the non-trivial ones.
So "surprising" == anything not previously known? That's not the definition I'm used to. I might not know the exact answer to sqrt(10), but I wouldn't be surprised by the answer considering I already know it's between 3 and 4.
To add to the other answer, Shannon's information is also sometimes called "surprisal". And it doesn't describe only what is not known, but rather what can be predicted. So if you can predict new information, it has low surprisal, even if it previously wasn't known (with certainty).
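For reference, the quantity being alluded to is Shannon's self-information, often called surprisal: for an outcome x with probability p(x),

    I(x) = -log2 p(x)   (in bits)

so a perfectly predictable outcome (p = 1) carries zero surprisal, and the less predictable the outcome, the more information observing it conveys.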
I don't think we even need a citation to refute that.
Have a developer create something modern with technology from 1990. It would be close to impossible. Compared to what was available then, our building blocks are more akin to entire neighborhoods being delivered pre-assembled in order to create instant cities.
There are more libraries for different things, but in the 90s there were quite a few commercial RAD application builders, and those went out of favor.
So while it is nontrivial to build a web application, or any modern (network-connected) application, today with 90s tools, I am not entirely convinced that building an application today is easier than it was in the 90s. If anything, I suspect we build more from scratch, or leverage existing open source, rather than using prefabricated commercial RAD tools (which are, and always were, pretty expensive).
Edit: I guess what I want to say is, if you compare writing actual "business logic", it is still about as difficult as it was in the 90s. We can do fancier things by leveraging libraries (sometimes included in the programming languages), but that's it. In fact I even suspect that people in the 90s were more productive in writing actual business logic, because there were fewer distractions from the fancy technology (such as the web). And I have seen 30-40 year old applications that nobody wants to rewrite, because their complexity is so high that it would take a long time, and it's not clear to me it would take less time to develop them today than it used to.
"I guess what I want to say, if you compare writing actual "business logic", it is still about as difficult as it was in the 90s. We can do fancier things by leveraging libraries (sometimes included in the programming languages), but that's it. In fact I even suspect that people in the 90s were more productive in writing actual business logic, because there was less distractions from the fancy technology (such as web)."
This has been my experience! I'm shocked that whenever I need to automate some business process, my organization always starts at ground zero - what should the base technologies and languages be? Database server? Authentication?
Seriously?!?! Why isn't there "workflow as a service" by now?
One simple example that I presented in the other comments: maps. I don't think we ever had (or that it was even possible in a non-networked/poorly networked world) commercial components for maps even remotely close to what services like Google Maps offer.
Similar thing for many tools and services we just take for granted these days (NLP, sentiment analysis, entire game engines, etc.).
The business logic was easier to write in 1990 because the scope was much smaller and the tools much more constrained. But our modern tools are 1000x more powerful, therefore harder to master, true.
I'm seeing that AutoRoute, which eventually became the basis of Microsoft Streets and Trips, was first released in 1988.
It certainly didn't have real time traffic, and probably didn't have lane configuration information, or comprehensive business information, etc. But (from descriptions, not personal experience), it showed maps and did routing.
More complex mapping tasks are readily achievable now because more information is computerized and the demand is there, and the portable computing machinery to make it most useful is there.
> One simple example that I presented in the other comments: maps. I don't think we ever had (or that it was even possible in a non-networked/poorly networked world) commercial components for maps even remotely close to what services like Google Maps offer.
Maps are inherently nothing new: instead of a Google search, I'd go to my local library or buy a physical map. No need for 3D-rendered vector processing over an interconnected network.
When I think about 1990, I was still using unix, I was still using emacs to write code, was still writing SQL to work with relational databases. Sure the web wasn't really a thing yet, but all the fundamentals were there. The concept wasn't new; we had hypertext and interconnected networks. I don't know enough about deep learning to say whether the concepts were unknown then or just waiting on computing speed to catch up. AI research was certainly well underway long before 1990.
At least for the work I do, I don't think much has fundamentally changed since 1990. Convenience and speed is way higher, yes. Storage and processing power is immensely cheaper. And obviously we've got 30 more years of development of things we can build upon. But I think if a good developer from 1990 could be teleported to today, he'd be productive with today's technology in short order.
That minimizes all the building blocks we've built meanwhile.
The human writing the code will write about as much code today as they wrote in 1990, true. But as I commented elsewhere, the building blocks we have now would boggle the mind of a developer in 1990.
We have complete game engines, with scripting languages included, physics, advanced graphics and a myriad of things I don't even know, that you can actually use for free now.
We have maps that cover the world, with distance estimations, navigation instructions, street views of every street, etc., that a 12 year old can integrate on their website.
A developer in 1990 would be very productive today, yes. But that's because of everything that has been built in these 30 years. With the tools he had in 1990, his output would be meager by modern standards.
People didn't have the modern tools that make it easy to create very slow programs 30 years ago, but I'd argue that's because those tools weren't useful to them, since their programs had the requirement that they needed to run on machines with 10 MHz processors (with way less work done per cycle, no branch prediction, etc.). So most of those "productivity enhancements" come from hardware being so fast that you no longer have to care about these things.
And no, a game engine like Unreal isn't performant; it wastes a ton of RAM and CPU on bookkeeping. That's a small fraction of a modern computer, but back then you'd rather have all the resources instead of wasting a huge amount of them.
Well, wait a minute, which part? Commercially available microprocessors back then weren't fast enough to render 3D graphics the way they can today, but otherwise, most of what we do today was perfectly achievable then: if anything, UIs have declined since then because we're putting everything on the web (for pragmatic reasons) rather than writing custom desktop applications.
I'm of the opinion that anything that was possible 20 years ago should be imperceptibly fast today, unless there is a fundamental physics limitation (e.g. the speed of electric signals limiting real-time applications across large distances). Displaying a GUI, opening a file, listing the contents of a directory. When the hardware has improved by several orders of magnitude, there is no excuse for being as slow as it was 20 years ago, let alone slower.
Screens have arbitrary resolution and dimensions these days and they range all the way down to a small phone.
Our state of the art (web) is still evolving towards better solutions to address that. Software in 1990 targeted a couple fixed dimensions and only for a single OS and a desktop input device.
Look at some of the cloud offerings and think about how long it would take to implement those things from scratch with what was available in 1990. Stuff like NLP or maps.
Back then if you wanted to put a map on your website it was probably impossible. Now it's a widget a 12 year old can put on their website.
Well, he said 1990... there were no websites in 1990. I definitely agree with OP that the stuff that we do on the web today would have been impossible even in 1995 when the web was at least available, but that's because browsers were limited, not because programming was limited. In terms of programming, nearly everything that is done today could be programmed back then using the tools available, we're just dealing with larger volumes and higher latencies today.
As far as I know, the value produced by programmers per unit of time is extremely difficult if not impossible to measure. You could argue that a programmer in 2020 is not 10 times faster at writing 1990 style gui software, and maybe you are right.
However, if you would try to write a modern web application like Basecamp only with methods and technologies from 1990, this is not going to work out at all. There would be no web frameworks, no web server, no browser, no Ruby, no Java, no stackoverflow, no Google, no automated testing, no git, no stripe, no script, no css - no anything.
You would probably give up and go for something like a client-server application with Delphi and Oracle.
And you'd probably work with a waterfall development style, create tons of features that nobody wants, and then ship the whole thing as a bunch of floppy disks.
Who would pay for something like that? Even if you sold it at 1/10th of what people pay for Basecamp or Jira, no one would want it. It would be trash. There have simply been fundamental improvements in the capabilities of software, and in our abilities and methods of writing such software, so that it is not even possible to compare them, and yes, I would say this is at least an order of magnitude of improvement.
Having programmed since the 1980s, you're somewhat right.
Git is way better than pkZip and floppies full of dated zip files. It's not 10x better, though. A single developer wrote most things back then, for a single customer.
We had text with attributes, colors, images, all of that under Windows 3.1 and later. When you deployed a program, you knew it was going to run in Windows, on 640x480 or larger... maybe as big as 1024x768 if the user had thousands of dollars for a nice graphics card and a big Sony Trinitron Multisync.
We had multiuser databases, like Access, that allowed all of the users of a system to share data across their organizations. Programs were shipped on disks, either floppy or CDs. They then worked forever.
The Waterfall Programming model was meant as a joke. On the small scale, I did it once... the program took 2 months from start to finish... the customer was happy that we met the requirements, but then wanted more. We negotiated a deal and I spent the next year doing rapid prototyping (agile?) on site, with lots of user testing. That application deployed with hardware in one site visit, and was usually run forever after that with only the occasional phone call, or field trip for faulty hardware.
Things are NOT better today than they were. In Delphi, for example, every function had a working example included in the documentation. You didn't need to search Stackoverflow every ten minutes... it just worked. The fact that the deployment platform was known, and you had control over the code all the way made things incredibly easy to ship to a customer and support over the phone.
Today, I can write applications in Lazarus/Free pascal, and ship them in a single zip file. The customer's screen looks exactly like mine does, there's no need to worry about dependencies, internet connectivity issues, etc. Recently I reached back into my archives from 1994 for a string function I wanted... and it worked.
Things are mostly just different today, not mostly better.
I simply don't believe that, combining all the improvements of the last 30 years in software, hardware and management, things are not 10x better.
Where is the data that shows that things are just different and not better? Things are different today, yes, and for a reason. People demand different software today for a reason, and the way software is made has changed for a reason.
What can you do today that you couldn't do in Windows for Workgroups? The hardware is massively better. I had a side-project last year that I tried to do in Python, because I had used it a bit in the past, and figured things just had to be better because it was decades since the last time I wrote something big from scratch.
It was a horrible experience, except for Git replacing ZIP files to allow undo. WXBuilder only generates Python; it doesn't allow you to edit the results and go backward... a significantly less useful paradigm than Delphi in Windows, or Visual Basic 6, for that matter.
Eventually after much frustration, I managed to write layers to completely decouple the GUI from the actual working code, which was ok... but then I needed to change one of the controls from a list to a combo box... everything broke... 2 lost weekends of work... and got it working again. Any GUI change took 20+ minutes.
Eventually I gave up, pulled out Lazarus/Free Pascal, and re-implemented everything in about a week of spare time.
After that, GUI changes took seconds, builds took seconds. It just works.
I greatly appreciate the power of the hardware, and persistent internet instead of 56k dialup... but the GUI tools have gone downhill.
I tried Visual C++, but it generates a forest of code and parameters that you really shouldn't have to deal with.
Maybe I haven't found the right set of tools, but as far as I'm concerned, it was actually better programming in the 90s, except for the hardware, and Git.
> What can you do today that you couldn't do in Windows for Workgroups?
1990 was two years too early for Windows for Workgroups and 11 years too early for lazarus. Nevertheless:
- Being able to choose between dozens of memory managed and open source programming languages without having to think too much about performance or memory usage
- The ability to comfortably do in memory what previously could not even fit on a hard disk
- Doing anything with (compressed) video in realtime
- Deployment and distribution of software to a global audience of users, using a broad range of device types, screen sizes and processor architectures, all in fractions of a second
- Setting up servers and fully managed platforms, data stores and databases at the touch of a button
- Full text search in a global database containing all documentation for any available software, including millions of Q&A articles, in a fraction of a second
- The ability to use a variety of third-party online services for automated billing, monitoring, mailing, streaming, analytics, testing and machine learning
- ...
You could probably continue this list for a long time and find many more such improvements that have been made in the past 30 years. If you could find just a dozen such improvements, each giving you around a 20 percent productivity advantage, they would compound to close to a 10x improvement (1.2^12 ≈ 8.9).
By the way, I think to make a fair comparison, we should not compare what is mainstream today with what was leading edge in 1990, but with what was mainstream in 1990. The difference between leading edge and mainstream is 10 years or more; so the question "What could you do in 1990" should be "What was typical in 1990".
We didn't have to set up servers all the time, they just ran, for years, without interruption. Some machines had uptimes in decades.
We had worldwide software distribution, before the internet. BBSs, Shareware, etc.
UseNet had every support channel in the universe.
Email involved routing through ihnp4
Many things are clearly better, but IDEs really didn't keep up.
Today, you don't have to set up servers all the time either. But you can and that is a huge advantage.
You don't need to hire a special person to constantly optimize your database server and indexes. You don't need sharding, except in the most exotic use cases. You don't need to manage table ranges. You no longer need to manually set up and manage an HA cluster.
> Some machines had uptimes in decades.
What machine had uptime in decades (i.e. >= 20 years) in 1990? Did you have access to such a machine?
I agree that it is nice to be able to fire up a machine from a script. Back in the days of MS-DOS, it was entirely possible that your work system consisted of a few disks, which contained the whole image of everything, and you didn't hit the hard drive. That's pretty close to configuration-less systems.
As for databases, they were small enough that they just worked. Database Administrators were a mainframe thing, not a PC thing.
I didn't have a huge network, only a handful of machines, but one of my Windows NT servers had a 4-year uptime before Y2K testing messed things up.
A friend had a Netware machine with 15 years of uptime... started in the 1990s.
Moore's law and the push to follow it has given us amazing increases in performance. The software that runs on this hardware isn't fit for purpose, as far as I'm concerned.
None of the current crop of operating systems is actually secure enough to handle direct internet connectivity. This is a new threat. Blaming the application and the programmer for vulnerabilities that should fall squarely on that of the operating system, for example, is a huge mistake.
It should be possible to have a machine connected to the internet, that does real work, with an uptime measured in the economically useful life of the machine. The default permissive model of computing inherited from Unix isn't up to that task.
Virtualization/Containers is a poor man's ersatz Capability Based Security. Such systems (also known as Multi-Level Security) are capable of keeping the kernel of the operating system from ever being compromised by applications or users. They have existed in niche applications since the 1970s.
For the end user, lucky enough to avoid virii, things are vastly improved since the 1980s. The need to even use removable media, let alone load stacks of it spanning hours, is gone. The limitation to text, without sound, or always on internet, sucked.
But, in the days of floppy disks... you could buy shareware disks as your user group meetings, and take the stuff home and try it. You didn't have to worry about viruses, because you had write protected copies of your OS, and you didn't experiment with your live copies of the data. Everything was transparent enough that a user could manage their risk, even though there was a non-zero chance of getting an infected floppy disk.
Programming is hard to improve because it's not very simple to know which programming ideas are good and which are bad. The feedback loop between decision and consequence is so long and opaque that if something works, or if it doesn't, it's not possible to easily point out why. To make things even more complicated, most good programming ideas are only good in certain situations, and terrible in others. On top of that, we're not even very good at knowing what our software is supposed to do.

Building a race car is a lot more complicated than writing a simple node web app, but it's way easier to tell if you have a good race car than it is to tell if you have a good node web app. If we could agree on what improved programming was, it might be a lot easier to improve. But everybody has their own opinion about which ideas are good, and in what context those ideas are good. We can't even agree about what's important to optimize. Which leads to situations where you can find highly experienced experts passionately disagreeing with each other about absolutely any given topic, and also experts and charlatans in complete agreement, with no easy way to tell them apart. How would we even be able to tell if programming has improved?
I also think that the biggest problem we are facing is that we still haven't developed any useful metrics or measurements of quality in our field. There are some easy things like performance or resource consumption, but others, like security, robustness, maintainability or plasticity, are equally important, yet different approaches cannot be compared numerically, only by argument.
> If you build something that’s too difficult to learn, but very productive you’ll turn a lot of people off.
Counterexample: C++. To a lesser degree, perhaps also Rust? (Disclaimer: I don't know Rust.)
> The problem is, within a few weeks of using any paradigm developers usually have built a repository of habits that keep them from making mistakes.
No, developers constantly make mistakes. Even after all these decades, C programmers are still writing code with buffer overflows. The way to solve that is to have better tools: use a safe language, or formally verify the C code. At this point, we know that it's not enough to hire brilliant C programmers and have them exercise care.
> Things have been getting better. The growth has just been in the ecosystems and not the paradigms themselves.
I don't think there's a bright line between these two.
Good progress is being made in making formal verification more approachable, for instance. This is no small thing, it takes hard work in computer science, but doing so lowers the barrier to the paradigm, making it more practical for more developers and for more problems.
Merely thinking up the paradigm ("it would be neat to have tooling to verify that our code matches our formal model") is only the first step. In a sense the paradigm only really exists after the hard work has been done.
In a similar vein, recent advances in low-pause concurrent garbage-collection might broaden the scope of the problems that can be solved with garbage-collected languages.
> The legacy is the value, it can’t be thrown out.
I broadly agree. See the classic blog post from Spolsky, Things You Should Never Do, Part I, on how throwing out working code is very often a mistake.
>At this point, we know that it's not enough to hire brilliant C programmers and have them exercise care.
I mostly agree with your points. In my experience (never worked at FAANG), the brilliant C programmer hypothesis is almost untested, since most companies hire mostly average C programmers. It's only logical that non-brilliant programmers will be working somewhere.
Even if a company has a few brilliant C programmers on staff, they won't be able to find all the issues the mediocre programmers produce.
This is why I think it's about time that safety-critical industries think about retiring C in favor of safer alternatives.
Rust would personally be my favorite, since it solves most of the really terrible issues around memory management in C and C++ (mediocre programmers not being able to get their code past the borrow checker is a feature, not a bug).
Of course Rust doesn't have the necessary certifications yet, but industry would be wise to contribute. Such efforts are underway (Sealed Rust by Ferrous Systems), but I don't see big aerospace, automotive and industrial sponsorship here, i.e. from those who would benefit the most.
There is a safer, certified language that is often ignored: Ada. It's actually quite modern in terms of features since the 2012 revision. It has some other concepts than Rust for safety, but that doesn't mean it's worse. The heavy lifting in terms of design and certification here has been paid for by the DoD and aerospace companies, so there's no reason other industries couldn't use it.
> the brilliant C programmer hypothesis is almost untested, since most companies hire mostly average C programmers. It's only logical that non-brilliant programmers will be working somewhere.
Chromium has suffered from buffer-overflow issues, and that's a high-profile security-sensitive C++ codebase run by Google.
> Rust doesn't have the necessary certifications yet, but industry would be wise to contribute
I'd rather avionics code be written in verified SPARK than in Rust, but if they can build a rock-solid Rust compiler it would be good to have it compete in that space. I imagine it could be a nice alternative to MISRA C++, for instance.
As you've probably gathered from my mention of SPARK, I agree with you that Ada is sadly overlooked.
It turns out that the basic idea of an imperative language that's much like C/C++ but much safer and better suited to automatic runtime checks isn't a new one. Ada does this, it's existed since around 1980, it's mature, there are various compilers for it approved for life-critical work, and its performance matches that of C if runtime checks are disabled.
It's comparable to Zig and Rust in that regard, but those languages aren't the first to aim for C performance with far superior safety properties.
> there's no reason other industries couldn't use it.
Right, especially considering the fine work done by AdaCore, who make their tooling available as Free and Open Source software.
Perhaps Ada's reputation for being boring is its greatest vice, as well as its greatest virtue?
>Chromium has suffered from buffer-overflow issues, and that's a high-profile security-sensitive C++ codebase run by Google.
Like I said, I haven't worked at Google. But are you sure every programmer on Chromium is a genius? Google has a hard interview process, but does it filter out every bad engineer?
And if Google's "probably better than other companies'" engineers can't do it, how can we expect other companies to deal with C?
>I'd rather avionics code be written in verified SPARK than in Rust, but if they can build a rock-solid Rust compiler it would be good to have it compete in that space. I imagine it could be a nice alternative to MISRA C++, for instance.
Right now SPARK would definitely be favorable. But I think even current Rust would be much safer than MISRA C++.
> are you sure every programmer on Chromium is a genius? Google has a hard interview process, but does it filter out every bad engineer?
I suspect that yes, it filters out all candidates who are outright bad. I don't think this point really matters though. If even Google aren't able to produce a large C++ codebase free of buffer-overflows, even when it really matters, it suggests that no-one can. Edit: this doesn't need to be an argument drawn from a single sample, either. Buffer overflows happen to just about all major C/C++ codebases: OpenSSL, the Linux kernel, Windows, the Apache Web Server, nginx, etc.
We could quibble about the way they don't use the sort of methodology used in avionics software development, but I don't think there's much value in exploring that. If Google consider that to be too slow/costly for Chromium, it highlights how rarely those methodologies can practically be used.
I don't think there are any individual C++ programmers who are too smart to ever introduce undefined behaviour, but even if there were it wouldn't matter. As you indicate, real large-scale software development happens with teams, not lone geniuses.
> I think even current Rust would be much safer than MISRA C++.
I agree that the language might be a reasonable choice, but I don't think the Sealed Rust project is anywhere near delivering a Rust compiler that can be trusted with your life. Even mature C compilers can have serious bugs, [0] and compiling Rust is far more challenging than compiling C.
I've run into (multiple) compiler bugs in certified C and C++ compilers, so I know how difficult this is. But still, a much larger percentage of the bugs I've seen were introduced by the developers, which is why I said even now Rust would probably beat MISRA C++ in terms of the total number of bugs. I do think Sealed Rust has a bit of a way to go, but it would be a worthy investment.
Seen from the point of view of total risk, the compiler errors are much harder to find, but the developer errors can still kill you.
In any case SPARK is indeed even farther ahead in terms of having a reliable toolchain paired with a language that makes safe code easier to produce.
"Even if a company has a few brilliant C programmers on staff, they won't be able to find all the issues the mediocre programmers produce."
Also, I am very sceptical that even brilliant C programmers can avoid mistakes when they have a bad day, lack sleep, or are distracted by personal problems.
And about Ada: do you have experience with it? What could be the reason it is not used more broadly? I assume a lack of libraries, etc., plays a part?
I mean, there are millions of free and open C and C++ libraries around for almost any use case; I assume this is not the case with Ada?
Sorry to double reply but I see I missed some of your points:
> Basically with ADA, you develope much slower (and performance is usually lower)
Ada deliberately emphasises readability over writeability. It might take a bit longer to do initial development work in Ada than in C, although it's likely that you'll encounter fewer bugs, so Ada might win out even here. Subsequent maintenance is likely to be cheaper/easier. I believe comparison studies have borne this out, although I'm not certain how trustworthy they are.
As for performance: with runtime checks disabled, Ada should perform about the same as C/C++, as its abstractions don't tend to have runtime costs, it's just pretty plain imperative code. Ada isn't typically used with garbage collection, for instance. Like C++, you have the option of using the language's OOP features.
With runtime checks enabled, you'll pay some runtime performance cost (let's say very roughly 15%, to simply make up a number out of nowhere). C and C++ don't give you the option of enabling as many runtime checks as Ada does, due to the free-form way arrays work, for instance. gcc and clang both give you the option of enabling runtime checks for certain kinds of errors, such as dereferencing NULL, but plenty of other kinds of error won't be caught.
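For concreteness, here's a rough sketch of what those opt-in checks look like on the C side (assuming a gcc or clang build with sanitizer support; exact diagnostics vary by version):

```c
/* Build with, e.g.:  cc -g -fsanitize=address,undefined demo.c -o demo
 * AddressSanitizer then aborts with a stack-buffer-overflow report on the
 * oversized copy below; a plain build just corrupts the stack silently. */
#include <string.h>

int main(void) {
    char buf[16];
    const char *msg = "this string is clearly longer than sixteen bytes";
    memcpy(buf, msg, strlen(msg) + 1);   /* ~50 bytes into a 16-byte buffer  */
    return buf[0];                       /* keep buf from being optimized away */
}
```

In Ada, the analogous checks on constrained arrays and ranges are part of the language and normally on by default (raising Constraint_Error), rather than an extra build mode you have to remember to enable.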
> But if you want reliability, it does sound good.
Ada has some traction in the critical-systems space. The Boeing 777 flies on Ada, for instance. Of course, C and C++ are also both used for safety-critical software.
I think that's a part of it. Few GUI toolkits can easily be used from Ada, for instance. Ada has excellent support for interfacing with C code, but making decent bindings is never trivial.
I think lack of hype might be part of it too. Ada is seen as a stodgy and boring language good for writing autopilot software, whereas Rust and Zig are seen as new and exciting general-purpose programming languages. I'm not saying Rust and Zig bring nothing new to the table, of course, as they both do, but in terms of safety and programming with an emphasis on correctness, Ada has a lot to offer.
I have only hobbyist experience with Ada. It didn't seem nearly as hard as a lot of people said it was, and I was intrigued by the ways in which it's safer than C.
My best guess is that people are unfamiliar with Ada, unwilling to learn, and most of all believe that hiring will turn into a problem if they switch to Ada. Also they bring up the brilliant programmer theory.
The hiring problem seems superficially compelling.
There are indeed fewer Ada programmers out there, which of course is a chicken-and-egg problem if companies are not trying to adopt it.
Now the fourth argument often follows the third argument. My problem with it is that most of the programmers I've worked with whom I'd describe as brilliant are the first ones to tell you that we should use a safer language, because they know even they make mistakes. And a lot of the brilliant programmers wouldn't have trouble picking up Ada or learning something new.
So my answer would be that if you truly have brilliant programmers, you don't need to worry about hiring. If you don't have those brilliant programmers, you really can't afford to stay with C, because you will have huge costs from debugging and from being sued for the bugs you didn't find.
A process that cannot survive a single programmer having a bad day is not a high-reliability process. That problem is trivially easy to "solve" in any number of ways.
1. You can have multiple people whose job is to fully understand that code and review it.
2. You can have independent 3rd party reviewers whose job is fully understand the code and review it.
3. You can have adversarial reviewers from your competitors review the code with veto power over your code if they discover and can prove there is a defect.
4. You can have a requirements doc that traces to the code and a corresponding test to exercise that property.
5. You can verify that everybody understands the source by reading the compiled output to verify that it correctly traces to the expected source constructs.
6. You can have multiple people develop independent systems with the same functionality to run in parallel to crosscheck results with each other.
7. You can have those systems developed in isolation from each other so that they share no code or ideas giving even greater independence.
8. You can run these systems on the same and different hardware in parallel to provide even more independence and crosschecking.
Every single one of these raises the bar from a single programmer having a bad day to requiring multiple people independently failing on the same code, which is multiplicatively less likely. Every single one of these is something that is actually done for high-criticality avionics software. Any "high-reliability" process that seriously considers a single programmer having a bad day to be a material unsolved risk is made by complete amateurs. This does not mean high-criticality avionics software is perfect; it just means that the concerns are about genuinely hard problems, such as how to prevent errors that even a dozen brilliant programmers cross-checking each other would be unable to prevent without the process.
To put the difference in reliability between these systems into perspective, a "high-reliability" commercial system like AWS only promises 99.99% uptime in their standard SLA. In contrast, the 737 MAX, which most people consider an absolute deathtrap and evidence of the terrible quality of avionics code, had one failure per ~200,000 flights or 99.999% per-flight reliability (10x the reliability if we compare number-of-flights to seconds-of-uptime). If we were to consider all high-criticality avionics software, avionics software has not been implicated in any crash for the last 10 years except for the 737-MAX, so at ~10,000,000 flights per year for a total of ~100,000,000 that means avionics software has 99.999998% per-flight reliability or 5,000x the reliability if we compare number-of-flights to seconds-of-uptime.
The 737 Max crashes weren't the result of failures in software development. As far as the crashes go, the software engineers built the system they were asked to build, and the problems were at a higher level than the software (i.e. aeronautical engineering and systems engineering).
Software issues have since been discovered, but aren't thought to have been related to the crashes, as I understand it.
I actually happen to agree with you, but I deliberately decided to take the worst possible interpretation of the lowest-reliability system (one that is worse by orders of magnitude than the average) to highlight the differences in outcome. If the absolute dead-last system, a system that most would view as shameful and many as an unconscionable deathtrap, is an order of magnitude better than best practices in commercial software, it is probably not a good idea to learn from commercial software vendors or to use any knowledge based on their inexperience: they cannot even consistently match the reliability of an unconscionable deathtrap, so there is no reason to believe they actually know how to make systems of higher reliability than that.
> Counterexample: C++. To a lesser degree, perhaps also Rust?
Are these actually counterexamples? While in absolute terms there may be a fair number of people using them, in relative terms the amount of people who know C++ and/or Rust is absolutely dwarfed by the amount of people who know "easier" languages like JS, Python and PHP.
In this case "easier" refers to the amount of effort needed to pick it up and get some result on screen, not to the amount of effort to be a guru. I think for a lot of beginners productivity and even (memory) safety is completely irrelevant. They are not very productive anyway and their code crashes all the time because by definition they make beginner mistakes. What is more important for beginners is to keep up momentum and get through the initial hurdle of "I don't understand this" to the point where you start thinking "this is fun, I can make cool stuff". Rust and C++ solve professional problems, not beginner problems.
> While in absolute terms there may be a fair number of people using them, in relative terms the amount of people who know C++ and/or Rust is absolutely dwarfed by the amount of people who know "easier" languages like JS, Python and Ruby.
Are you sure? JS is huge today, not least because it's the only game in town on what is currently the most popular easily accessible platform. However, I've never seen anything to suggest a huge difference between the number of programmers who know C++ and the number who know Python, for example. Both are very popular languages with at least a few million developers who know them. At that scale there's no reliable way to get an exact count and the difference doesn't really matter anyway.
> However, I've never seen anything to suggest a huge difference between the number of programmers who know C++ and the number who know Python
Anecdotal evidence here, but I hire for both and it's definitely easier to find developers who actually know Python and what Pythonic code looks like.
> new programming languages including some flow & graph based paradigms and no-code visual builders.
Does dressing up Turing-complete programming as visual graphs change anything, fundamentally? Does it make software easier to understand and maintain?
In my experience, when people not trained as programmers (or perhaps even programmers) start using visual programming tools in real-world projects, what results is often unmaintainable spaghetti. For example, see https://blueprintsfromhell.tumblr.com/
But perhaps I misunderstood what the author meant with the above blurb.
Five lines of code, or five rectangles connected by arrows, both are simple.
When people promote visual programming, they make it sound like the five rectangles will replace a thousand lines of code. But if that is true, why not simply implement a library with functions equivalent to the individual rectangles, and replace the code with five function calls? (A sketch of what that would look like follows below.)
Instead, when doing anything nontrivial, we get pictures with a thousand rectangles connected by arrows, for pretty much the same reason we previously had a thousand lines of code. It's because we want to implement functionality that is not trivially expressed by the provided primitives, duh.
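To make the "five function calls" point concrete, here's a toy C sketch (the domain and every name in it are made up): each function stands in for one rectangle, and the arrows become an ordinary chain of calls.

```c
/* Hypothetical pipeline: five "rectangles", five functions, one arrowed path. */
#include <stdio.h>

static double read_sensor(void)       { return 21.7; }               /* source box  */
static double celsius_to_f(double c)  { return c * 9.0 / 5.0 + 32; } /* convert box */
static double round_to_half(double x) { return (double)(long)(x * 2 + 0.5) / 2; }
static double clamp_max(double x)     { return x > 120 ? 120 : x; }  /* limit box   */
static void   display(double x)       { printf("%.1f F\n", x); }     /* sink box    */

int main(void) {
    display(clamp_max(round_to_half(celsius_to_f(read_sensor()))));
    return 0;
}
```

If the rectangles really are that reusable, a library plus five calls is the whole program; if they aren't, the diagram grows for exactly the same reasons the code would have.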
One of the things I've taken to doing on the semi-annual "Hey HN I've Fixed All Programming Problems Forever with $THIS" posts is encouraging people to head straight at these hard problems. I've also been encouraging the commenters writing the "dear lord it's so obvious that we just need to drop everything and use $THIS" replies to think more about the hard problems.
I believe I've now issued my challenge to three different Visual Language of the Day developers: don't show me how to add 1 to every element of a list; implement Quicksort (or an equivalently complex algorithm of your choice) in your new graphical language and show me how it's at least as easy to understand as the current paradigm. I chose that one on purpose on the grounds that it's actually not that understandable in normal notation, so I didn't choose it to be difficult exactly; I chose it as something that has some space for improvement, and, like I said, I'll accept any other non-trivial algorithm that you can make a decent case for being easier. I think only one of them actually drew it out, and it wasn't terribly nice; the others have ignored me.
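For reference, the textual baseline I have in mind is just an ordinary in-place Quicksort, e.g. this C version using Lomuto partitioning (any standard formulation would do):

```c
#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* In-place Quicksort over a[lo..hi], Lomuto partition scheme. */
static void quicksort(int *a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[hi];                 /* last element as pivot            */
    int i = lo;                        /* boundary of the "< pivot" region */
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot)
            swap(&a[i++], &a[j]);
    swap(&a[i], &a[hi]);               /* pivot lands in its final slot    */
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}

int main(void) {
    int xs[] = {5, 3, 8, 1, 9, 2, 7};
    int n = (int)(sizeof xs / sizeof xs[0]);
    quicksort(xs, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", xs[i]);
    printf("\n");
    return 0;
}
```

Show me the graphical equivalent of the index juggling and the two recursive calls, at least as easy to follow as that, and I'm genuinely interested.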
Bret Victor's "live views of the variables updating" stuff looks amazeballs in a crafted demo where only a couple of entities have their X+Y positions displayed in real time, with real-time modification. But it's actually not as new as some people think; I've used debuggers with that capability for a while now. It gets a lot less "ooo.... aaaa..." when you've got hundreds of live-updating values on the screen, and when you've got structured values being manipulated, like, something is adding values to a list as you go, or adding entries to a hash or something. It's actually still really useful and perhaps underused by some people (debuggers, people, I'm down with claims you shouldn't live in them but you need to learn how to use them when you need them!), but it's been around for a long time actually and in practice it isn't quite as game-changing as it looks, because it doesn't solve the problem of finding the important data. Figuring out which parts of a log are important and figuring out which of the hundreds of changing, complicated variables are important aren't that different in terms of effort.
In a nutshell, it's scale. A lot of these alternate "Solutions to Everything" don't scale. For all the faults of "text based programming" (instead of a richer manipulation system, or a visual system, or whatever) and our documentation schemes and the way we debug and how we write our code guarantees and so on and so on... they work. We build large systems with them at scale, routinely. The large systems are perhaps ugly, prone to failures, coughing smoke out of many orifices, and often built of complete garbage, but they work. These "Solutions to Everything" tend to break down at scales multiple orders of magnitude smaller than the current-practice competition.
By no means does that mean nobody should be working on new ideas, be they revolutionary rewrites of the entire status quo, or small improvements to make a lot of people's code a little better. My point is more that especially if you want to rewrite the status quo, you have got to focus above all else on how your solution is going to scale. Your cute little demo that fits on a screen is still going to be necessary to introduce people to your idea, but show me how your system looks with several thousand of whatever it is you are doing, at the very least. I also tell you that my standard isn't "does that look good", because honestly, at that scale nothing looks good because it no longer fits in human cognition; my standard is that you need to have something better than the current standard, which also doesn't fit into human cognition. You don't need to propose something that we can rewrite operating systems or office suites in, there's plenty of other niches, but you do need to propose something that can go beyond a single screen.
The main challenge programming languages face is not serving up bite-sized bits of functionality in comprehensible ways, it's providing ways to slice things into bite-sized bits that fit into my pathetically limited human cognition in the first place. I don't need a better way to understand that this bit of code adds one to everything in the list, I need better ways of breaking problems down into smaller pieces without incurring disadvantages in system organization, performance, composability, etc.
Because (my personal opinion) it is not programming that has to improve. A, B, then C if D is okay and humane. It is the weak base libraries that expose, and require you to control, the irrelevant junk that have to improve or vanish. Your basic app, be it web, desktop or mobile, is so full of bs and boilerplate of methodologies (not even just of code) that one has to learn them just to create CRUDs and component designs. You create and secure endpoints, validate DTOs, etc. over and over again. For what? Why not have an ORM/DB system that easily spans server, client and B2B? Why not have a default global message bus that connects directly to the user? And so on, depending on business requirements.
Instead we have to spend a week or more to learn and remember how, e.g., to make "webpush" work and how to serialize an XHR form. Meanwhile our libraries brag about yet another cross-platform #flatten() crap that helps no one but those who dig into this bs deeper than their peers so they can call themselves seniors. Of course normal people consider that hard, as we wallow in this madness and try to look smart.
Well, the idea is that there are already "do-it-all" frameworks, although they are tied to specific programming languages or paradigms (Ruby on Rails comes to mind).
Sadly, Rails, Django and other "opinionated ORM & HTTP" frameworks are two or three steps below what I meant. Some commercial things may get close in functionality (OutSystems, Salesforce, 1C Enterprise to name a few, but not exactly), but not in ease of learning and use, and they are usually too focused rather than being general purpose.
As programmers, I think we're way too stuck to the idea that programming is a keyboard and mouse exercise when we are trying to envision the future. Like many other people, I've been struggling with carpal tunnel during the pandemic and I've switched to dictation based coding. Since making the switch, I've realized how much of how I code is specifically due to the keyboard being my way to tell the computer things.
And graphical programming paradigms that I've seen are really just switching from keyboard to keyboard + mouse.
Right now, dictation is still at the point where it's a royal pain in the ass to have it as the primary input method. As soon as that changes, I expect that we will see a flurry of new programming paradigms.
Code is best represented as text when it is input (and modified!) as text. My current dictation setup includes a lot of it asking me "did you mean X, Y, or Z", and as natural language processing improves, those questions are going to get a lot more interesting. Even if NLP weren't going to improve, dictation+keyboard+mouse (+touchscreen, +eye tracking, ...) based programming will probably be a very different experience from keyboard+mouse based programming. Until then, I really only expect incremental change.
What should be noted is that the author already did the low-code/no-code thing as a startup that is still active and thriving: https://www.dropsource.com
So what he's doing and describing now is informed by that experience, which he talks about at length in the podcast episode. I find it much more insightful and less superficial than a lot of the writing on the topic.
One issue that's been on my mind is how to actually fund a new programming paradigm. As far as I can tell there are the following options:
1. Start a business and seek investment (examples: Unison, Dark). This is tricky since programming environments are hard to monetise (all the competition is free), and the investment is quite large (you need a great team working for many years to maybe make something good enough you can actually attempt to charge for it).
2. Do it as a hobby project (examples: many of the small PLs nobody uses in production). This is hard because it takes a LOT of work to just get a good programming language, and then you need to also produce great tooling, a decent number of good libraries, amazing learning materials, etc.
3. Academia (examples: Haskell, Idris, Elm, ...). This can work, but generally the focus will need to be on some academically interesting problem. But a lot of what it takes for a language to succeed is a lot of down to earth engineering, which academic positions aren't generally great at (i.e. it took Haskell about 20 years to start being somewhat successful outside academia).
4. Corporate sponsorship (examples: Elm, C#, Swift, ...). This generally needs alignment between the language and the corporation's goals. This typically limits the amount of radical inventions you can make, as well as how freely you can pivot to new ideas.
Fundamentally none of the above are readily useable for radical reinventions, but more for incremental improvements.
I think programming is easy for lots of people, we (devs/techies) just don't talk about it because it happens in spreadsheets. Excel is the most successful IDE.
I've come to suspect that syntax is a MacGuffin and programming languages are a dead end.
We're complexity junkies, and I suspect we know darn well (some of us consciously, but most of us only unconsciously) that if we created a truly usable-by-the-masses IDE it would put us out of work; we wouldn't get paid to mainline our drug of choice.
You do realize that this is just like saying that "those pesky scientists out there don't cure cancer because that would put them out of a job"? It takes just one scientist to break this worldwide cartel, and they would immediately become a billionaire and one of the most influential and famous people of the century. Can you spot where your predictions contradict game theory?
Besides, if there is one thing that programmers like to do, it's developing programming tools. Personally, I'm astonished by the amount of great tools available for free to everyone and the sheer amount of labor that people put there in their free time. Linux [1] alone is mind-boggling, and it's just a tiny fraction of the free codebase available to build your software on top of.
Given that, I fail to see how it's possible that millions of developers in the industry somehow participate in an anti-productivity cartel. Especially since it's potentially a very lucrative market, as you could sell this IDE in the millions and not worry about the future, whatever it would be.
[1] I know that most contributions to Linux are made by professionals on payroll, it doesn't change much overall.
> You do realize that this is just like saying that "those pesky scientists out there don't cure cancer because that would put them out of a job"? It takes just one scientist to break this worldwide cartel, and they would immediately become a billionaire and one of the most influential and famous people of the century. Can you spot where your predictions contradict game theory?
I think in that situation game theory predicts that that scientist is found dead in a burned out car with a bullet hole or two in the skull, eh?
But I'm not positing a conspiracy of mustache-twirling villains. It's obvious that the vast majority of programmers are well-intentioned. But it's also obvious that we ignore our own history (and we're slaves to fashion, but that's a tangent.)
Engelbart "cured cancer" in 1968[1], Alan Kay and co. "cured cancer" in the 70's[2], and Niklaus Wirth "cured cancer" in the 80's[3]. Things like Alice Pascal were around in the 80's.[4]
> Besides, if there is one thing that programmers like to do, it's developing programming tools. Personally, I'm astonished by the amount of great tools available for free to everyone and the sheer amount of labor that people put there in their free time. Linux [1] alone is mind-boggling, and it's just a tiny fraction of the free codebase available to build your software on top of.
That's a symptom of the problem, not an intrinsic benefit, most of that software is little more than video games for devs from the POV of human productivity. It's a sickening thought but BASIC has done more for the average person than Lisp. Compare and contrast Red lang to Rust, or Elm lang to the whole of the JS ecosystem. It's a bitter pill, but it's clear that we're playing by different rules than what we tell ourselves and others.
> Given that, I fail to see how it's possible that millions of developers in the industry somehow participate in anti-productivity cartel.
Arguably we are all doing that all the time, to a ridiculous extent. Agriculture is an "anti-productivity cartel" (to the extent that it destroys topsoil and fertility over time), and our entire housing system is an "anti-productivity cartel"[5], compared to the possibilities![6]
How is it possible that millions of refrigerators have doors rather than drawers? Every time you open the fridge you spill cold air on the floor and warm air replaces it and you have to pay for the electricity to cool that air off. It's an anti-productivity cartel!
> Especially since it's potentially a very lucrative market, as you could sell this IDE in the millions and not worry about the future, whatever it would be.
The links and reasoning you provide are in line with the posted article, i.e. that there are multiple reasons for the current situation that are grounded in the real world. And that even though we could have better tools, we're constantly hitting a local Nash equilibrium where it doesn't make sense to adopt them. Which means that your "cartel" is just the real world itself.
We might not like it, it may suck, but it is what it is - you don't wanna fight things like gravity, you have to build ugly and bulky flying machines until one day you have a teleporter which is so far ahead that you can throw away all your planes and airports on a whim. But until then the planes are your best bet.
> Agriculture is an "anti-productivity cartel"
This is a very good example, and again it's in line with the headline article. Modern agriculture is the product of the same pressures on the system: there is no way you can scrap the existing process and rebuild it with better technologies from the ground up, because you'll starve billions to death in the process. Also, in this case nobody is optimizing for the parameters those alternative solutions are better at than existing methods, so there we have it.
> How is it possible that millions of refrigerators have doors rather than drawers
Mine has drawers in the freezer [1] - it's very neat indeed, and it's also an incremental thing, because the main fridge compartment is still a single door. But again, there are real-world reasons it's done this way, most of all that it's just more convenient, and your regular household doesn't care about the extra $5 it'll spend on the extra cooling. Meaning, it lies in the nature of the system's consumer: the humans.
> Which means that your "cartel" is just the real world itself.
Well, it's your "cartel". I don't think an unconscious conspiracy really counts as a nefarious cartel. But the effect is very real: we ignore things like Red and Elm in favor of things like Rust and JS et al., and I think a large part of it is that we know, deep down, that we're effectively getting paid to party.
As far as it being "just the real world itself", sure, because that's tautological.
> you don't wanna fight things like gravity, you have to build ugly and bulky flying machines until one day you have a teleporter which is so far ahead that you can throw away all your planes and airports on a whim. But until then the planes are your best bet.
But bad software isn't gravity, it's just bad software. Westinghouse changed the emergency brakes on trains from default open to default closed (when the pressure failed) and saved who-knows-how-many lives and dollars (from trains not rolling down hills when the pressure failed.) We are just really stupid. Even when we build trains and things.
We should be so lucky to have the constraints of aviation on our software. Wouldn't be so broken and wasteful then.
Again, Engelbart invented the "teleporter" in 1968. (Changing the content of the metaphor doesn't unbreak it. We have the magic teleporting cure for cancer already, and have had it for decades.)
> there is no way you can scrap the existing process and rebuild it with better technologies from the ground up, because you'll starve billions to death in the process.
Yeah you hear that a lot but it's not true. (E.g. "Treating the Farm as an Ecosystem with Gabe Brown Part 1, The 5 Tenets of Soil Health" https://www.youtube.com/watch?v=uUmIdq0D6-A )
It turns out again that we are just being stupid. It's an easy fix.
The author nails the fundamentally hard problem in software.
If there is a new tech in bridge building, then every bridge going forward can use it, but software is successful because of its layers of abstraction and network effect. You can't introduce a new superior tech unless the huge amounts of existing tech and tools can first be made compatible with it. And that's harder to do because of all the degrees of freedom available in software rather than the relatively few in physics.
And even if a method could be discovered to make them compatible, there is no way for anyone to snap their fingers and update them all - it takes tons of work. And who is going to convince all of these other people to stop what they're doing to invest in supporting this better tech? Even if this better tech is there, how long will it take them to really understand the correct mental model to apply it to their tool? Functional programming is even now only slowly gaining adoption, because it's a different way to think. It takes us forever to learn and adopt a better method.
And even if people could be made willing to adopt a better tech and do all of the above, how can we show a tech is better? We don't even have an objective method of evaluating what "better" is; it's all hearsay and opinion, so everyone needs to evaluate it on their own. But even if people were willing to do that, we don't even have an objective definition of what "better" means in software tooling!
The problem is, in evaluating software, we are pre-scientific method.
The single largest breakthrough any one person could have, the only "singularity tech" in software development, would be for someone to design a model representing an objective human evaluation of a software language, method or tool. Something that allows us to objectively compare languages and automate the search for better tooling. Without that, it will continue to take 40 years to adopt clearly better ideas.
Until we can even say the equivalent of basic physical comparisons like "object x is heavier than object y" or "object a is faster/stronger than object b" using an agreed-upon standard, progress will come in fits and starts.
Imagine civil engineering over the centuries if people could look at two bridges and be unable to tell which is stronger or wider or longer or costs more to build. Think how much worse off our tech would be.
I feel like as an explanation this fails: why do these arguments apply only to programming and not other things, why do these arguments apply to the present and not the past?
Or are we also arguing that all things are hard to improve, and that programming hasn't effectively improved since the beginning?
Let's ponder on one example where programming was fundamentally improved in the last 10 years.
Git
Some developers adopted immediately.
Others were dragged into it kicking and screaming.
Some ignored it (managers).
In my humble opinion, improvements to a given tool latch on to the esoteric definition of what is considered "better".
Better for me?
Better for the company?
Better for the machine?
In the case of Git, the case is based on the "me" (Linus Torvalds).
In the dev world we had the joke that goes: if you don't like the tool/language, go ahead and write your own.
My takeaway is that improving programming languages and tools can happen if we take that joke to heart.
Re: The same things that make Scratch easy to learn also make it an unproductive environment for more serious programmers. For a professional, programming with drag and drop is way slower than keying in code — that’s just a fact.
I would say the best tools allow one to switch back and forth between visual/drag-and-drop and code as needed. Web-based UIs never seemed to get that to work well. The stateless nature of the web is a productivity killer in multiple ways. The same number of features could be created and managed with fewer programmer hours in IDEs from the '90s than in modern web stacks, in my observation.
Web stacks are so bloated that orgs need layer specialists to manage it all. "Separation of concerns" (SOC) is made to fit the shape of the dev staff, not the shape of productivity nor the domain. It just creates busy-work interfaces between all those layers; a time drain. The older tools were closer to the domain, such that it was less code to do the same thing, making SOC not worth it. SOC is a symptom of bloat, not a solution. Concerns naturally interweave; the forced walls just serve Conway's Law.
The counter argument is often, "yes, but we have choice". Sure, but you pay dearly for it.
I believe a stateful GUI markup standard could help bring rank-and-file office-oriented CRUD productivity back to 90's levels. One could then spend time on domain issues instead of gluing bloated buggy stack layers together. I can't say it will help all domains, but one-browser-fits-all is failing CRUD, if productivity matters.
They say you can narrow down from all of humanity to a single person with just 33 yes/no questions. I think a similar principle could be true for programs, if we just knew the right questions to ask!
I think this is what makes programming hard to improve. We have a poor understanding of what kinds of programs people want to write.
> I think a similar principle could be true for programs, if we just knew the right questions to ask!
Those right questions to ask are VERY hard to find, so we programmers have to write many many very accurate answers about how our programs should behave. That process is called programming.
So many replies, I'll reply to you but it's really for everyone. The thought experiment was just meant to illustrate how much easier things could be if we knew better what programmers wanted to write. It was not a serious suggestion for how programming should work.
What I do think should happen, and will happen, is that we will keep getting a better understanding of what behavior people want most of the time -- including the common variations, and make it ever easier to achieve it.
The ideal programming language would have only two commands: do_what_i_want() and do_it_faster(what). You could use them like this: do_it_faster(do_what_i_want()).
> if we knew better what programmers wanted to write
It's very simple: they want to write what has not been written before. What fun is it to write what has already been written? You can just copy it. Most of what has been written is typically embedded in some library/crate/module already. That is an ongoing process which will never* end. Summarising: it's impossible to know what programmers want to write, because the moment you find it and make it easier, they want to write another thing.
This is a dilemma that often rears its head (exemplified by the classic "42").
Intuitively difficult situations usually end up "conserving their difficulty." An easy solution means a difficult problem (hard to formulate the problem or the formalization of the problem is usually inapplicable). An easy problem usually means a difficult solution.
Is the idea behind the 33 yes-no questions thing a combinatorial point, though? World population is about 7.5 billion, and the smallest natural number n such that 2^n exceeds that number is 33.
So it shows you can uniquely identify any human with 33 yes-no questions, but reaching a member of a set isn't necessarily analogous to reaching a state of understanding.
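A tiny sketch of that arithmetic (it only checks the numbers; it says nothing about whether the questions would be meaningful):

```c
/* How many halvings does it take to isolate one item out of ~7.5 billion? */
#include <stdio.h>

int main(void) {
    unsigned long long candidates = 7500000000ULL;  /* world population, roughly */
    int questions = 0;
    while (candidates > 1) {
        candidates = (candidates + 1) / 2;  /* each yes/no answer halves the set */
        questions++;
    }
    printf("%d questions\n", questions);    /* prints 33 */
    return 0;
}
```

That's the whole combinatorial content: 2^32 is about 4.3 billion and 2^33 about 8.6 billion, so 33 perfectly bisecting questions suffice in principle; but, as above, reaching a member of a set isn't the same as reaching a state of understanding, and nothing guarantees natural questions split the set anywhere near evenly.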
Well they could be marginally interpretable, but yes they'd all still feel really artificial, extrinsic, and prone to change.
You could for example, have each of those questions be a series of geographical questions that eventually narrow down to a single point in space (you'll need to include z-axis questions as well). But the geographical regions cut out by each question will be insanely artificial.
Intuitively "satisfying" questions would probably require intrinsic questions, i.e. ones that could pick out individuals from the entire sea of hypothetical humans, living or dead, real or imagined.
> They say you can narrow down from all of humanity to a single person with just 33 yes/no questions.
For anyone who hasn't seen it before, Akinator [0] is based on this concept. Imagine a fictional character (or youtuber) and answer a series of questions, and the genie will figure it out.
(It works with just about any character at this point because if users find a path that doesn't result in a correct answer, they can add the character they were imagining)
That assumes you can ask the questions at exactly the same instant in time; you'd probably need more in real life. How many more?
Nothing is absolute in programming - anyone who's tried to uniquely identify a person in a database has struck this problem. For instance there was a time when gender was part of the key for a person in a database - not the case now.
Not just what people want to write, but what they are actually ready to pay for. Despite the mobile revolution altering some equilibria at the hardware level, software-making is still fundamentally led by business requirements, by the capitalistic pursuit of efficiency gains in money-making. This can be observed in the disconnect between educational software-making (Scratch etc.) and "real" software-making.
Those generalizations about why change is hard can be applied to any change in every field. Ideas are always decades if not centuries ahead of their implementations. Not even ideas, but proof of concepts too.
So let's talk specifically about software. The biggest problem that prevents moving forward is platforms and companies. Yes, businesses and organizations are usually the biggest factor in dampening progress, because they have so much mass it influences everything around them. But you would think that with the low barrier to entry in software, anyone could just spin up some OSS project and it would quickly trickle its way through. Even slow organizations are often quick to adopt new tech in certain conditions. There were fintech companies that adopted Docker really early on. But it's usually something that's isolated.
It's platforms that hold us back. The reason we don't see huge jumps in things like functional programming is because that stuff is great for parallelism, and we don't see jumps in parallelism because the OS, or game engine, or drivers, or chips have poor support. We could have better functional languages if browsers decided to support more functional principles in ecmascript, or even some other scripting language. We don't have all the cool programming stuff because it's piles of abstractions with groups that don't want to coordinate.
Sure there are other factors - there are cultural issues that lead to programmers making bad decisions and writing bad code. Java, for example, was a good language that got ruined by horrible enterprise culture. It's not like people didn't point out the flaws early on, but they got ignored. Lots of people took "microservices are a good idea" to mean "everything must be a microservice". But even then, these problems are perpetuated by companies that mishandle their platform. If companies made user guides clearer we wouldn't have these issues. Not only is the communication bad, but it's often in arcane places. To figure out how to use something you have to read through GitHub discussions or follow mailing lists to get useful information.
If rungs to your ladder are broken, you're not going to be able to climb very high.
Java was a good syntax. The language itself had way too many limitations back when Sun ran it, for really dumb reasons proffered by too smart for their own good people. I was sadly paying attention to the wretched arguments about how there was literally no way to reasonably make first class functions work, or closures, or value types, or escape analysis, or any of the million things that C# just said yes to from day one which worked out fine. It was a lot of edge case kvetching that really needed a dictator to cut through, but Sun were terrible stewards and let the worrywarts run the show.
The enterprise culture could I guess be blamed, but Sun was a shite enterprise company when it came to financials, you'd think MS would be much worse in terms of command/control but C# ended up as a damn good version of Java, and once Oracle grabbed the ball hairs Java got pretty great. So it's not the whole problem.
Is Swift really the second most popular language since 2000?
Golly.
I have been using Swift for some months now. It is more modern than C/C++, sure, but it really suffers from lack of development.
The ergonomics of it are poor, especially compared to Rust. Some design choices are quite odd (the definition of a constant with `let`, for example).
It is the only easily available choice for developing on Apple, AFAICT. Hence it is so popular.
As a fresh developer on Apple, but grizzled and grey, I must say the development tools and ecosystem remind me of Linux in 1997. Everything mostly works
I'd say the progression from machine code to Python is a fundamental improvement.
For simple cases, I think a visual/textual code interface is possible, but for any complicated case the visual interface will become inscrutable. Maybe an optimal tradeoff is like how game engines allow scripting languages to live on top. A number of 3D engines use this paradigm with a visual language in place of a scripting language.
“In my experience developers are very rational beings.”
Yeah, no. Stopped there. Vanity is rationalizing the belief that you’re any better at being rational than the rest of us inchoate apes.
There are many technical, logistical, and political arguments as to why improvement (in any sphere) is hard, but if you’re not going to start with “It’s Us, The People” then where usefully are you going to start?
Perhaps it's not that developers are more rational, but that they are better at expressing logical-sounding reasons that they don't want to change.
It's also possible to see developers as more willing to change (more "rational") than others when something new can help them get their job done better. If nothing else, developers seem to like trying out new languages and tools for novelty's sake, even if it's not because they're actively trying to improve.
I buy the thesis as stated; there's a common attitude among researchers of, "why don't we have this already?" that seems counter-productive.
However, there's an adjacent thesis that's also worth stating: fundamentally improving programming is hard, but still worth attempting. There's just too much risk with the current status quo. We've built up our world atop a brittle, insecure, unstable tower of software, and it doesn't feel unreasonable to ask if it might lose capability over time[1].
The good news: you don't have to give it all up all at once and return to the stone age to try something new, as OP says. There's nothing stopping us from using the present as we like to build the future. The key, it seems to me, is to prevent the prospective future from inter-operating with the present.
You won't get lots of users this way, but I think jettisoning the accumulated compatibility baggage of 50+ years of mainstream software might free us up to try new things. Even if the world doesn't switch to it en masse, it seems useful to diversify our eggs into multiple baskets.
Here's what I work on: https://github.com/akkartik/mu. It's a computer built up from machine code, designed to be unportable and run on a single processor family. It uses C early on, but it tries to escape it as quickly as possible. Mu programs can bootstrap up from C, but they can also build entirely without C, generating identical binaries either way[2]. (They do still need Linux at the moment. Just a kernel, nothing more like libc.)
I call this ethos "barbarian programming", inspired by Veblen's use of the term[3]. I rely on artifacts (my computer, my build tools, my browser, my community support system, the list is long) from the surrounding "settled" mainstream, but I try to not limit myself to its norms (compatibility, social discouragement of forking, etc.). I'm researching a new, better, more anti-fragile way to collaborate with others.
Here's a 2-minute video I just recorded, experimenting with a new kind of live-updating, purely-text-mode shell built atop the Mu computer: https://archive.org/details/akkartik-2min-2020-12-06. The shell is implemented in a memory-safe programming language that is translated almost 1:1 to 32-bit x86 machine code. Without any intervening libraries or dependencies except (a handful of syscalls from) a Linux kernel.
I wonder how the author would feel about the low-code solutions that are catching on in enterprises. For example, OutSystems seems to be changing how programming works while letting people use languages they are familiar with.
So much this. We have had pure well-typed functional programming languages for decades, and they are great at forcing clean designs and catching bugs at build time, and yet they're still niche tools for the most part.
One of the reasons for the 80/20 rule, specifically that it takes 20% of the time to make 80% of the product and the remaining 80% of the time to make the remaining 20% of the product, is that we perceive most of the accidental complexity to be in that final 20%: technical debt, localization edge cases, bugs, misuses that turn our products into spying tools, etc.
I think that accidental complexity isn't a software paradigm limitation. It's a human thought limitation. It takes time for human organizations to learn how to handle the inevitable yet unanticipated issues and exceptions to the rule, and it takes time to turn them into processes and then to automate them.
The ecosystem delivers a lot of pre-packaged essential complexity - e.g. only now Rust is starting to get the mature libraries to handle difficult domain-specific things. It is hard to program from scratch (no pun intended)