The author is fighting a strawman. Rather than engage with the specific problems these solutions were built to solve, they dismissively regard them as just flavor-of-the-week trends, adopted purely for the sake of chasing newness. This is true of the entire post, but I'll tackle just one example, since it's emblematic of my issues with all the rest:
The argument for Electron and React Native isn't "it's modern", it's "it's much cheaper". Hiring experienced desktop application devs to build a quality native app for each platform is going to be expensive; hiring a few JS bootcampers to build one React UI that works on every platform is extremely cheap. Shittier performance is the tradeoff for instant access to every platform. It's not a coincidence that Electron apps like Slack, Spotify, and Discord are massively dominant players in their markets. I doubt you'd look the engineering leads of these companies in the eye and tell them you believe they put no thought into the tradeoffs of Electron and that they're just following trends.
I think that's what's offensive to the purist engineer mind. These aren't engineering decisions, they're business decisions (or maybe the closest thing you could call them would be "financial engineering" decisions). The best thing, by pure engineering criteria, is not what gets built. The best thing for the business is what gets built. In that context it's a complaint as old as the hills.
...and not specific to computers either, or even the private sector. In a past career I was a civil engineer, and I learned about how traffic light green times (how many seconds a signal stays green for one direction vs. the cross street) usually have a "right answer" from an engineering standpoint, which as you can imagine, has to do with maximizing traffic throughput and/or minimizing delays to each individual vehicle. And then the mayor of the town shows up out of the blue and says no, this direction is going to have a longer wait, I'm going to delay people intentionally, so that when they're stuck in traffic, sitting there, they'll have more of a chance to see signs/placards of, and contemplate patronizing, nearby businesses, whose owners are my political backers. That made a big impression on me... must have, because I feel like I've told that story in an HN comment before... but anyway... yep.
> These aren't engineering decisions, they're business decisions
All engineering (including "real" engineering like civil or mechanical) is about fitting your requirements within budgets. When you're building a skyscraper, you try to build it in a way that minimizes cost while satisfying all requirements (it shouldn't fall down under such-and-such conditions, etc.)
Engineering is largely the art of solving problems within real-world constraints, and one such [major] constraint is cost. When you build software, you're not given 10 years, a research team and infinite money. You're given just enough resources to build something that satisfies the requirements at the lowest cost (financial and temporal). There's always a way to build things "better", but that's not the point.
Your quote is better, but for people who have a hard time with that analogy, I like using the example of a 4-story concrete/brick building: anyone can make a building stand strong by filling it with concrete and bricks. It takes precise engineering to know how thin you can make the walls and supports for that building to be worth building.
If you just mindlessly fill everything with bricks and concrete, you likely need to make the lower walls thicker than the upper ones to support the weight...
The whole engineering process can be done without ever looking at the budget as long as the framework is given ("use these materials", "you have this much space").
Feels like everyone is very disinterested in the cost to the user with most "modern ways" of doing things. Many webpages use far too many resources just to communicate text. Web adverts are especially awful for this. I used to browse the web on dialup with a machine a thousandth as powerful as my phone. So as a consumer, why do I need to "upgrade"? Where is the added value, beyond Google-scale delivery of computationally intense adverts that I then waste cycles blocking?
Of course they are, because the user does not reward them for being sparing with resources that are, frankly, insignificant on modern devices. It might be different if you were operating in a market where people largely rely on very low-powered devices, though.
That's a circular argument, because "on modern devices" moves the goalposts every year. I still have a Galaxy Note II, which I would definitively classify as a modern device. But some webpages have terrible performance on it, because anything older than 3 years is not worth optimizing for.
Same thing with old iPhones: the only reason they don't work reliably today is because each piece of "modern software" is less efficient than yesteryear's equivalent, and the constant redefining of what constitutes a "modern device" is what perpetuates the problem.
This isn't an engineering problem, it's a consumerist dystopia. The computing market is encouraging wastefulness in all dimensions: useful device lifetime, computing costs (bitcoin farming anyone?), software efficiency, framework longevity (jquery, angular, meteor, react), dependency management (NodeJS). None of those problems are new, but every new entrant seems hell-bent on outdoing the wastefulness of its previous incarnation.
Perhaps a circular process. Not a circular argument. It's just reality that users reward optimization they can't really notice less than features. In places where it does matter to the bottom line, like how fast a shopping page loads, the optimization is prioritized.
But the cost to the user is not considered, even in the real world. For example, transportation decisions in the airline industry rarely consider how much they will financially impact the lives of passengers. The same pattern is replicated everywhere: companies work to maximize their profits, while their clients have to deal with possible financial losses.
Desktop OSes are not appreciably faster now than they were 30 years ago, despite exponential rises in memory, storage, and processor capacity. Better looking, yes, but not faster.
Games have maintained roughly the same performance for 20 years. However, the number of polygons has risen exponentially.
There's a minimum acceptable performance for most systems. If you exceed that, no-one cares (the extra performance is wasted). So every system hits that minimum performance level and spends the rest of the budget on appearance (or configurability, or ease of editing, or whatever).
Wordpress is slower than flat HTML. But the average web browser and connection is fast enough to serve Wordpress content in an acceptable time for the user. The "extra" budget was spent on making it easier to create the content.
> Feels like everyone is very disinterested in the cost to the user with most "modern ways" of doing things.
The only time a decision maker actually cares about the cost to the user is when they are trying to calculate how much money (or savings) they could capture from the user.
Proper engineering also takes the end users into account, even the ones who aren't paying for it. If other engineering disciplines were as hell-bent on cutting corners as SWE, these situations would be commonplace:
- "cars older than 5 years are not supported on this bridge"
- "Sure, we can perform maintenance on this bridge, but with every round of maintenance work, the maximum speed across the bridge will be lowered by 5mph"
- "No, I'm sorry, you can't call our office from a landline. We only accept support calls from a Sprint mobile device, and the contract must be less than a year old"
- "What do you mean, you need to replace a fuse? You can't do that, they're built-in. Please contact our office to buy a new house".
- "Sure, our new Model T does 5 gallons to the mile, but look at its color! It's not just black, it's Vanta Black (tm)! And look at these rounded bumpers, aren't they a marvel of engineering?"
I wouldn’t be surprised if solving the problem within the mayor's constraints increased costs, though.
I have a few stories where senior management or a preferred developer had some pet project, or crazy ideology, so we adopted it despite protests. If we had picked a standard way, we would have been done faster, cheaper, and with fewer bugs.
But you can satisfy requirements under different interpretations: here on HN we cry and whine when database queries are inefficient, when code is suboptimal, when sites are hacked and leak info, and much more. These things were all probably fine within the requirements, just like this crap Electron experience for the user. So where do I draw the line? And why?
Ok. What is the art of building things that are good, but violate some constraints of the real world? I’d like to use the products from that discipline.
Jokes aside, do you really not understand what GP means? "Real-world constraints" are things like having to make money, or having to build something with only 2 other programmers to help you. Nobody's claiming that, for example, academics (who don't need to make products that earn money) don't have other problems (like grant writing or juggling academic responsibilities), but that isn't what people mean by "real-world constraints".
> What is the art of building things that are good, but violate some constraints of the real world?
“The Romans were so much better at building stuff”.
You could make buildings matching those, they’d be massively over-engineered and overpriced compared to the modern equivalent, and significantly more limited in suitable locations.
also within time constraints, building an electron app in the scenario outlined is probably a lot quicker. Normally building something quicker implies that later maintainability is adversely affected, but in this case the tradeoff is not time vs. maintainability, but time vs. performance.
Performance has generally been getting the short end of the stick since early Windows days anyway when Gates had people build not for what a computer could handle when the application was built, but rather what it could handle a year down the road.
>When you build software, you're not given 10 years, a research team and infinite money.
So actually, you don't need all that to identify when there's a bird in the picture anymore. What's the new bird-in-a-picture requirement? https://xkcd.com/1425/
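(For the curious, here's roughly how little it takes nowadays, using TensorFlow.js's pretrained MobileNet. The packages and calls are real; the bird-name regex is just my own quick heuristic, not a proper class mapping.)

    // npm install @tensorflow/tfjs-node @tensorflow-models/mobilenet
    const tf = require('@tensorflow/tfjs-node');
    const mobilenet = require('@tensorflow-models/mobilenet');
    const fs = require('fs');

    async function isBird(imagePath) {
      const image = tf.node.decodeImage(fs.readFileSync(imagePath), 3);
      const model = await mobilenet.load();             // pretrained on ImageNet
      const predictions = await model.classify(image);  // [{className, probability}, ...]
      image.dispose();
      // Crude heuristic: ImageNet labels include many bird species by name.
      return predictions.some(p => /bird|finch|jay|robin|magpie/i.test(p.className));
    }

    isBird('photo.jpg').then(console.log);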
> also within time constraints, building an electron app in the scenario outlined is probably a lot quicker.
But building it in Free Pascal/Lazarus is (almost?) as portable across systems, probably at least as quick, and comes without all the horrendous overhead and bloat.
It's best to understand early that an engineer deals in two commodities: money and time. Their job is not to create ivory-tower ideal solutions. The whole problem, the reason engineers exist, is to manage technology such that the constraints of money and time can be met.
You know, I hear this argument a lot, e.g. for why Slack is a dog-slow Electron beast.
The thing is, Slack has not been short of time OR money for at least the last 5 years. They could easily have built leaner, faster native apps, and customers would have appreciated it.
This "ivory tower" stuff smacks of just world/just market fallacy.
I suspect there are deeper issues here. For example, vendor lock-in inhibiting competition paired with the "electron app team" being a powerful fiefdom within the company.
IME the concealed undercurrent of politics tends to provide a better explanation for many hard to understand decisions at large companies than unexpected technical prudence. Why does Uber's app require 150 developers? Why is Facebook's app such a beast? Why did they create a whole new "lite" app rather than slimming it? Why does Google have 13 messengers?
Reaching for a calm, rational engineering decision to explain each case is tempting, but wrong.
So once you have an app up and running with millions of satisfied users, you would hire another team of more expensive developers to build a native app... for each platform? So make that 2 more teams, or rather 4 more teams? (iOS, Mac, Android, Windows)
Telegram already does this with a rather small team. Hell, there are even third-party Telegram clients that, while a bit behind on features, eventually catch up with every single one.
You don't get to be a billion-dollar company by spending money unnecessarily. The question is never the total size of the company, but the cost relative to other options.
See, this is interesting to me, because I come from a different engineering background and was taught that the constraints were safety first, then cost. Time to build was a part but not as important as those other two.
Obviously safety is a bit less important in the software world, but hearing time be a key part of an engineer’s focus makes me uneasy for some reason, even though it makes a lot of sense from a business perspective.
The saying about optimization goes, "Time, Cost, Quality: Choose two." You've chosen the two that most profit-minded entities choose, but they are not the only choices.
Optimizing for quality and time can be a winning solution, particularly if you can achieve novelty as well. Countless corporations and founders have had success with the strategy (Apple under Jobs comes to mind). Optimizing for quality and cost is also viable: Deming preached it, and, e.g., Japanese car manufacturers have shown it to be a highly successful strategy.
yeah but the problem is: which budget are you optimizing? That of the company, of course. You offload a lot of bullcrap onto the end user. Like the author said, wasting the electricity and battery life of thousands of devices, just because you can't be bothered to generate HTML on the server to serve simple text, is a common thing for news sites.
This wastes the end user's time as well, because it's slower to load and performs worse on the device itself.
> yeah but the problem is: which budget are you optimizing? That of the company, of course.
The budget of the company is predicated upon what the user will be willing to pay for the service.
You’re wasting electricity and battery life of thousands of devices of users who don’t care or want to “vote with their feet”.
Now obviously users are also constrained by what is made available to them, but time and again the shiny and early to market wins out even if it’s a literal trash fire…
Let's not pretend that such business decisions are following rules written down on stone tablets passed from above. They're nothing more than going with the flow and following the path of least resistance in a market that painted itself into a corner of inefficiency and technical inferiority. Chiefly because of Google's empire building, VC money and the quest for millions of users.
Those two facets of quality, or the lack thereof, in the applications being discussed are undeniable, and the offence will continue to be perceived until the underlying problems are resolved.
Watching the rhetoric of the web dev community over the years, I have the impression that they previously thought some big technical improvement would come along and allow them to compete on equal footing with native applications. Now they've given up and just mumble something about Electron being cheaper - an improvement, even if far from the ideal :-)
> And then the mayor of the town shows up out of the blue and says no, this direction is going to have a longer wait, I'm going to delay people intentionally, so that when they're stuck in traffic, sitting there, they'll have more of a chance to see signs/placards of, and contemplate patronizing, nearby businesses, whose owners are my political backers.
You gotta be kidding me! Not only are we subjected to this constant visual pollution in the form of advertising but these politicians deliberately make traffic less efficient in order to force us to "contemplate" this garbage?
That's extremely offensive and not just in the purist engineer sense. The audacity of these people to think they can literally hold you down and force their little commercial interests into you.
> the mayor of the town shows up out of the blue and says no, this direction is going to have a longer wait, I'm going to delay people intentionally, so that when they're stuck in traffic, sitting there, they'll have more of a chance to see signs/placards of, and contemplate patronizing, nearby businesses,
> [...] usually have a "right answer" from an engineering standpoint, which as you can imagine, has to do with maximizing traffic throughput and/or minimizing delays to each individual vehicle [...]
Another common problem is when an Engineer is told to maximise a specific variable, rather than apply their wisdom to determine _which_ variable to optimise.
Optimising for more cars usually means that public transport can't improve, and moving around gets worse for pedestrians and bikes. Ultimately, mobility does not improve; you just need more cars and more car lanes. Car flow improves, but nobody stops to think whether that's really the most important thing to improve.
> "The best thing, by pure engineering criteria, is not what gets built."
How do you define the "best thing, by pure engineering criteria"?
The least memory/CPU? Or the smallest download size?
These are somewhat irrelevant vanity metrics in most cases... they're especially vain, if optimising them loses you potential clients and makes the project a failure...
Well it's a net negative for the end user, which should have been the priority for everyone involved, and what you're describing sounds like a corruption of this system. So I wouldn't call it an engineering thing, it seems far more general than that.
>> Well it's a net negative for the end user, which should have been the priority for everyone involved
I feel like you're missing the point. Let's say you are a product manager. Here are your options:
A) increase the product price so the budget can be bigger. Understand that increasing the price will lower customer numbers, but net inflows may be larger. (does this option prioritise the end user?)
B) spend all your budget on the platforms with the most usage. Ignore less-used platforms completely, but deliver the best possible product for your selected platforms. So 100% of the spend goes to the Windows client, and of course ignore Apple and Linux desktop. (does this prioritise the end user?)
C) build a cross-platform solution that is sub-optimal on all platforms, but targets multiple platforms on a limited budget. This reaches the most people, but the experience from an engineering point of view is less than ideal for all of them. (does this prioritise the end user?)
How you answer depends on who you consider the end user to be.
If you see them as "people who have bought our system on our preferred platform" then option B makes the most sense.
If you see them as "people who could use our system to improve their lives" then option C makes the most senses.
If you see them as people who can afford fine engineering then go with option A.
On the up side, no matter which one you choose you are prioritising the end user, so it fulfills your premise.
I don't consider it a "net negative" that I can use a bunch of stuff on every device I own rather than just one or two. And frankly, if you happened to be involved in the sysadmin world during the long, slow death of Windows XP, you might even appreciate the benefits of Web apps even from the technical standpoint.
I'm sure COBOL programmers thought the same. Some day all the existing Qt programs will need people to convert them to something else. Keep those skills sharp.
Indeed it is, however the end user is probably not the customer of the business. The business is serving its customers. Sometimes end user = customer. But often they are different sets of stakeholders with their own priorities. I'm not saying businesses go out of their way to piss off end users, but customer needs trump end user needs.
I’m very critical when a company like Slack cannot find the time and resources to make native clients. I’m so #%^*ing tired of it taking three Mississippis to show a channel I click on.
But on the whole, I don’t think the armchair critics truly appreciate how Electron reduces the cost by at least an order of magnitude. It’s a brilliant tool for shipping early and fast. My only criticism with these start-up use cases is that these Electron apps can often just be websites.
I can’t think of any reason not to use Slack exclusively as a website. I’ve been doing it for years now, and it’s a better experience IMO. For Teams, the only disadvantage is you can’t share your screen and video simultaneously when running as a tab; but I find the screen sharing experience to be better otherwise when running as a browser tab.
Also: all of these “shitty” Electron apps work just as well on Linux as they do everywhere else. That is a huge advantage. I’ve got a significant amount of software that is realistically only on Linux because of Electron.
I run Slack outside of a tab, in its app format, because it's easier to navigate to using keyboard shortcuts and OS-level features. That may not be enough for you, but it's plenty of reason for me, and I imagine many others.
With Chrome or Edge, you can take any web page and turn it into a standalone "app" that lives in the start menu and can be pinned to the taskbar and Alt-Tabbed to, etc.
In the Chrome menu, select More Tools / Create Shortcut... and edit the title and check the Open as Window box.
In Edge, select Apps / Install this site as an app. Edge has a lot more options here than Chrome. You can pin it to the Taskbar or Start, create a desktop shortcut, and set it to auto-start. (You can do those manually with a Chrome shortcut if you know your way around the Start menu and Startup folder, but it's easier in Edge.)
I don't see a similar option in Firefox. No idea about Safari. If anyone knows about those browsers, please comment.
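For the command-line inclined: Chromium-based browsers also accept an --app flag that opens a URL in its own standalone window directly, which has the same effect (binary name varies by platform and install):

    chrome --app=https://app.slack.com
    msedge --app=https://app.slack.com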
And having posted this comment, I am now wondering why I am still running the Slack app. I think I will try making it an Edge app!
Some people in the thread said they preferred running Slack in a browser tab instead of the Electron app, perhaps to avoid the memory overhead of a separate browser instance.
The comment I replied to described a preference for the app because of OS navigation, e.g. Alt+Tab, a separate taskbar icon, etc.
This Chrome shortcut or Edge app trick lets you combine these approaches, running the website in your browser but giving the site its own main window that you can pin to the taskbar, set to auto-run, and navigate to like a native app.
I would have guessed the memory consumption would be similar given the sandboxed tab model of Chrome. Anyway... even if not, do any of you guys really find that the memory consumption of Slack is a problem for you in practice?
Not a Slack user here. Is the app version _slower_ than the website? I assumed that since Electron is only a little bit more than a Chromium wrapper that they would have identical performance.
One Chromium runtime managing five tabs is more efficient than five Chromium runtimes running one site each (because those instances aren't able to share anything with one another).
Long-time Slack app user here. I don’t know how you guys organize your Slack, but performance has never been an issue for our organization. I don’t think it would run any better than it currently does if it were an OS-native app. Of course, my machine isn’t a toaster.
Lots of tools are better as websites; however, underhanded motives often mean mobile apps get pushed harder, and have for years. Inevitably someone pops up to say "but what about essential feature X that you can't do well on the web", even though it's rarely genuinely essential, or it's a worthwhile trade-off.
My GP surgery uses a fantastic website for streamlining comms where you'd never want to install an app that would be rarely used. It's so well integrated that they could ping me the link during a call and I had details back to them seconds later so I got a decision there and then as we spoke rather than breaking flow state and delaying it by days as they used to.
Funnily enough my dentist has a similar kind of app relating to pre-appt preparation - perhaps there's a fight back for common sense in these slightly more serious fields?
There are plenty of native mobile apps out there that are better off as web apps.
If it's a service that people use constantly, then there may be some merits in using an app.
But more often than not, there are plenty of things people use once in a long while that should probably just be a website.
For instance I know a place (outside the US, not naming the place because that's not relevant) that has a mobile app just to highlight the local tourist spots. It just shows pictures and text description. There's no AR/VR stuff or anything. Neither can it reserve tickets for any of those places. In my opinion that should just have been a static website!
>For Teams, the only disadvantage is you can’t share your screen and video simultaneously when running as a tab; but I find the screen sharing experience to be better otherwise when running as a browser tab.
Did they fix multi-stream video yet? Last time I checked, in the browser version you could only see one video stream (i.e. one webcam), whereas on the desktop app you could see multiple.
Last I used it I could see multiple people simultaneously. I could even do Together Mode, but it seemed to be rendered on a server somewhere and streamed, rather than done locally. Maybe they were doing that with webcams and I just didn’t notice.
The main reason I was using the browser was so I could use Stylus to make it compact just like Slack. That’s another benefit of the browser approach, but ideally it would not be so obviously inferior to competitors.
I disagree. With Teams on Linux (it might be on purpose), the app is cut way down: you can't request control (or couldn't a few months ago), and you can see fewer simultaneous videos at once (IIRC 2 or 4). So no, on Linux the app has almost zero advantages over the browser version.
Maybe desktop Linux should get its act together and make a target that everyone else actually feels like targeting, instead of grasping at the relevance of someone dropping Electron bombs as the true path to happiness, or emulating Windows games for that matter.
Usually they are websites. That's kinda the whole point... Spotify's desktop app is the same as open.spotify.com, Discord's is the same as discord.com, VS Code's is the same as vscode.dev, Slack is the same as.. something. But yeah, instead of those companies all needing 5 separate engineering teams (Web, Mac, Windows, iOS, Android, and you're joking if you think they'll make a Linux native app), they just make a website. You're welcome to use the websites instead of the desktop apps for any of those, though you'll lose some OS integration capability.
I think it's a huge pity that we're so reliant on platforms instead of protocols - if Slack were a protocol, there would be native clients for every platform. Few companies even allow proper API access.
Small companies love protocols, but as they become big and dominant they start pivoting to platforms to exert control and extract more profit from the ecosystem.
We need regulations around platforms. Protocols are inherently more powerful and, from the perspective of humanity, almost always preferable to platforms, so the incentives of companies should reflect that; right now they don't.
Stuff like email, http etc is what makes computers great. I doubt anything like that could get invented today; companies would just create their own proprietary protocols, and there would be no web browsers. Imagine if we had just continued along that route and made protocols for everything, instead of having big companies lock down computing habits via platforms.
There are tradeoffs beyond just the company making more money, as Moxie's recent web3 assessment covered. Protocols take much longer to evolve, platforms are quick to evolve.
Allowing platforms to innovate while somehow incentivizing or requiring them to open up what they've built seems like it would be the best of both worlds, but I'm not sure how that could be done well.
If people actually paid for usage, that would be one thing, because then it would be all about getting people to use your platform so you made more money, and having others innovate on top of it that you could later fold in would be cost-effective. In a world of free APIs and services, it's all about keeping control of every bit.
Maybe all it needs is a nudge to make the decision to hoard all data no longer worthwhile. Make companies liable for PII that's lost, make someone's PII their own property to be used only with their agreement, and maybe the rest will start to fall out of it.
The UK open banking regulations are a good model to follow.
The problem is the intersection of extreme profits at stake and the public's lack of understanding of the underlying causes. That's not a recipe for good political outcomes.
In the case of the UK open banking regulations I suspect tech lobbying (powerful) might have trumped high street bank lobbying (not as powerful).
> Stuff like email, http etc is what makes computers great.
I concur with this. I've gotten into IRC recently, and have started reading the RFCs/specs for various protocols instead of ducking "how to do X", and it's greatly increased both my enjoyment of computing and my knowledge.
Crazy idea that I'm sure isn't an original thought: instead of adapting the languages to deal with abstracting the idiosyncrasies of each OS, change the OSes to expose a universal API to make everything else lighter.
Cross-platform 'windowing toolkit' APIs are what's missing. At this point I'd be happy if Microsoft, Apple, and the UNIX-likes (who would probably just implement any free open standard anyway) could agree on even ONE such interface.
Postscript as a UI language? Fine as long as they all do it.
HTML5+ as a UI language? (E.G. like the failed HP Fire or Firefox) Fine as long as they all do it.
Some new thing that isn't a complete dumpster-fire but is included everywhere and a free for anyone to implement interface? Fine as long as they all do it.
We have NO shortage of programming languages that can be cross compiled to different operating systems. Filesystem differences can be annoying but you can work around those. Entirely different user interaction paths? That's the sticking point.
That would be ideal, but there are huge differences between the window implementations on different platforms.
One really obvious example (maybe no longer current - it's been a while since I spent any time on Windows) is that OS X windows are responsive even without focus. So you can scroll around background windows just by hovering without having to click first. Windows doesn't support this.
The bigger problem is that the data structures for menus, chrome, and the rest are completely different. An OS X menu is nothing like a Windows menu is nothing like a Linux menu.
There are systems like Tcl/Tk and Qt which act as middleware between GUI descriptions and specific OS bindings, and they kind of work. But they force devs to learn a separate intermediate language and (IMO) they look crude compared to real native apps.
Of course there are also business reasons why GUI convergence won't happen.
If those don't go altogether stagnant, then usually what happens is that a bunch of features only work in the "official" implementation, so it's a degraded experience anyway.
I'm kinda hopeful that Maui+Blazor will eventually let us have our cake and eat it too - instead of "everything is web", do "everything is a desktop-style gui framework" including the web. Although AFAIK they don't even have a web renderer on the horizon right now - purely desktop+mobile.
Of course, there will be angry PMs and designers that will be upset that their website looks like it was spat out of a desktop GUI framework instead of looking exactly how they want it to look. I remember people whining about how desktop-looking ExtJS was back in the day... this was for an inventory-management application, shouldn't it look like a desktop productivity app?
> Should it not be possible to style a native GUI application with the same framework language as for a website?
Not if the native controls are strongly opinionated about their style language, such that themability is limited; unless, that is, you go to the level where your “native GUI” is doing the equivalent of drawing to a canvas and reading mouse/touch input from it to implement a custom UI, rather than using the more specific native controls.
Isn't that basically what Flutter does? In my experience, it works well enough for mobile, although you still have people saying it's not "native enough." It's less stable on web and desktop though but it works.
Flutter on web is basically unusable for anything that involves text or links. The browser has so much native functionality here that trying to reimplement it is futile.
> You're welcome to use the websites instead of the desktop apps for any of those, though you'll lose some OS integration capability.
On the flip side, when you use them as websites, you gain the benefit of extending them through browser add-ons and extensions. An example where I work is that everyone uses Asana’s website because our Everhour extension works in Chrome but there’s no equivalent for Asana’s desktop app.
This gave me the idea to use uBlock Origin to hide Spotify's "Episodes for you" section... a simple `open.spotify.com##section[aria-label="Episodes for you"]` and I don't have to deal with their spam ever again!
open.spotify.com##section[aria-label="Episodes for you"]
open.spotify.com##section[aria-label="Shows you might like"]
open.spotify.com##section[aria-label="Shows to try"]
open.spotify.com##section[aria-label="Funny LGBTQ+ Podcasters"]
This may not end up being the cakewalk I was expecting.
VSCode is also appreciably different, both in capability and implementation: https://franz-ajit.medium.com/understanding-visual-studio-co... But it's interesting that the V8 debug protocol is becoming more common, so maybe that will come to a browser some day :)
Slack's webapp isn't feature-complete, if they even still have one. vscode.dev is new; the app existed for a long time before it. In practice it's the reverse: Electron allows you to turn an app into a website and add yet another supported platform.
This is something I think many HN'ers don't understand. For a good amount of modern web functionality, being supported in Firefox is less a matter of "do testing in firefox and make sure things work there" as much as it is "have a very popular product and good marketing team so that you can convince the firefox team to whitelist your domain in their internal maps of `what cool things should X domain be allowed to do`". People see that Slack works in Firefox, but Joe's Voice Calls.Com doesn't, and blame Joe for not supporting firefox. In reality it's firefox that doesn't support Joe.
You misunderstand; you have the situation exactly backwards. The problem isn't that Firefox doesn't support these APIs; it does. And the problem isn't that they're limited to specific domains; they aren't. The problem is that the Slack code is `if isChrome() { enableCalls() }`. Mozilla has to spoof the user-agent to bypass this check. Joe's Voice Calls.com will just do no conditional, or do feature detection rather than user-agent checks, and everything will work fine.
The exception is the list of domains that allow auto-play video with sound by default (like YouTube) but I think there are very few instances of this.
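To make the contrast concrete, a minimal sketch (enableCalls is the hypothetical from above; the WebRTC checks are just one example of feature detection):

    // Fragile: user-agent sniffing breaks in any browser not on the list,
    // even when the underlying APIs work fine there.
    if (/Chrome/.test(navigator.userAgent)) {
      enableCalls();
    }

    // Robust: feature detection enables calls wherever the APIs actually exist.
    if (typeof RTCPeerConnection !== 'undefined' &&
        navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
      enableCalls();
    }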
Ah, indeed I misunderstood. Though to be fair, at $DAY_JOB we had to do exactly the process I described above to access some API (not auto-play) without a litany of obnoxious prompts. If anyone at a smaller company tried to do what we were doing, they'd be 100% unable to provide a usable experience for Firefox users.
Hey, look, it's simple: just spend $5k on a new top-of-the-line M1 Max MacBook, then it'll be as fast as IRC was on a 133Mhz MMX Pentium 1.
Slack is great; it's like "hey, we took IRC and added inline images, now it's three orders of magnitude slower". Google Sheets is great in the same way: "hey, we took Excel and removed half the features plus now it's a few orders of magnitude slower BUT EASIER TO SHARE THINGS". Spotify is similar: "Like WinAmp, but slow, but with a subscription you can listen to most of what you want immediately and share things easily." These are in ascending order of their actual value compared to their predecessors, I guess.
Distributing the load of webapps makes sense if you've ever had to run a few thousand web servers running PHP apps, but yeah, running a framework server-side to render the page again and calling it modern is just kinda...the same thing we had before with a fresh coat of paint and more levels of abstraction.
Back in the day, we had CPAN, PEAR, and PECL. Now we have NPM and approximately 18 trillion modules. You can get stuff that does all kinds of useful types of manipulation, but under the hood some of it is still just a wrapper around some combination of zlib, FFmpeg, OpenSSL, and maybe Boost, plus TensorFlow or OpenCV (which can use FFmpeg) these days.
The amount of truly novel problems we're solving still seems low. "what Andy giveth, Bill taketh away", or I guess these days, "What Lisa giveth, Sundar taketh away"?
IRC isn’t even remotely competitive vs Slack for 99% of its users. It’s the same ignorant take you hear from people who complain about Excel files. The overwhelming majority of users do not experience the cosmos at all like you do.
One thing about Sheets vs. Excel: Appscript for automation is a lot easier to get set up with than VBA ever was to the point where my org has a ton of things that we just throw a small amount of Appscript at (E.G. generating performance reports) because it just works and it doesn't require a lot of processing power (yay, cloud!)
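For anyone who hasn't tried it, the barrier really is that low. A hypothetical sketch of the kind of report script meant here (the sheet name, column, and address are made up; SpreadsheetApp and MailApp are built into Apps Script):

    function sendPerformanceReport() {
      const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Metrics');
      const rows = sheet.getDataRange().getValues();  // [[header...], [row...], ...]
      const total = rows.slice(1)                     // skip the header row
        .reduce((sum, row) => sum + Number(row[1] || 0), 0);
      MailApp.sendEmail('team@example.com', 'Weekly report', 'Total: ' + total);
    }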
Are you suggesting that a massive company should run their own IRC server, complete with bots that handle OAuth and access control, history saving, document downloading?
I'm not suggesting anything really. If anything I'd be happier for even more asynchronous collaboration via email, and realtime chat available as an exception.
In the thread chain I was just curious why people think IRC could not be a Slack replacement.
Related to your question about these integration services.
Running an IRC server is definitely not complicated. Bots which implement authentication/authorization already exist (NickServ/ChanServ/X). History saving is also common, and for in-client message history a bouncer is a pretty good tool to have. Document downloading, you have DCC, or yet other bots you can query/download from (pretty common in the mp3 download era).
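For reference, the services flow really is simple from the user's side; something like this, though exact syntax varies by network and services package:

    /msg NickServ REGISTER <password> <email>
    /msg NickServ IDENTIFY <password>
    /msg ChanServ REGISTER #ourchannel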
If there were an incentive, someone could create a bundled package that unifies all these different components plus a more modern client interface, and people would have no issue using it.
It's not in the problem space I'm interested in working on, but functionally, even the more restricted existing IRC offering would be something I'd rather use than the gif-meme-filled, emoji-overdosed chats I've had on Slack.
On the other hand, I've been an independent contractor for a couple of years now and I had the pleasure of not being part of large team slack channels anymore anyway.
> I’m very critical when a company like Slack cannot find the time and resources to make native clients.
It's not a matter of finding resources; it's a matter of finding that to be the best use of resources. Obviously they could have native apps for the major desktop platforms with the energy put into the Electron app, and performance would probably be better, but would they have feature parity? And if you increase the resources, would the best use of that increase be maintaining native apps for each platform instead of Electron?
I believe feature parity is less important than the basic HCI of a responsive interface for the most essential features. Some of the most recent features they've added, like the "WYSIWYG"ification of the input box, were arguably more disruptive to my use of Slack than any benefit they offered. If good baseline performance is achieved for basic features like chat, image sharing, and calls, then the kinds of features that would lose out on parity later in these clients' lives ought to be more niche: things like huddles (arguably just a different kind of call), bot command tooltips, and the ability to draw on a shared screen.
Maintenance is a real challenge, but one that delivers the value of good HCI and performance on dev machines; a worthwhile operating cost for a product that is harder to unseat. Better-performing programs also save memory for the rest of the user's programs, like containers and IDEs, which can get pretty resource-hungry.
I think with the recent chip shortage, supply chain issues and potential catastrophes in the future with Taiwan, there's a pinch that will force us to be more resource efficient on a per-machine basis at least for a while. And we may find out that there are miraculous things we can do on the individual machines we have, with what felt like very little processing power, once we're forced to adapt away from overvirtualized, shallow quick-to-market apps.
EDIT: on further consideration, I'm wondering whether it's possible within things like Electron to have some kind of "just in time" hybrid infrastructure, where the most basic features are augmented to run native code quickly for the most heavily used parts on the most common platforms? I don't know enough to say.
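For what it's worth, something in this direction already exists: Electron can load compiled Node addons (N-API), so hot paths can run native code while the UI stays web. A hedged sketch; the module and function names here are invented:

    // Main process (Node side): require a prebuilt native addon for the hot path.
    // "fast_search" would be a C/C++ addon compiled with node-gyp.
    const fastSearch = require('./build/Release/fast_search.node');

    const { ipcMain } = require('electron');
    ipcMain.handle('search', (event, query) => {
      return fastSearch.query(query);  // native code does the heavy lifting
    });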
I agree Electron is a great way to serve tons of end users. But let's not exaggerate: there are 4 main systems to target: iOS, Android, Windows, and Mac. It's not that insane to have 4 well-optimized applications instead of 1 mediocre fits-all approach.
The problem is more recruitment and internal mobility, I guess. If you have 4 teams, each expert in its domain, you will have 4 teams 'fighting' each other over the UI, working in their silos, and it will be harder to maintain feature parity... But that's organisational.
> I’m so #%^*ing tired of it taking three Mississippis to show a channel I click on.
I can guarantee you whatever is causing Slack to take 3 seconds to load a channel has nothing to do with it being built on Electron and, all else being equal, would probably have the same problem if it was built as a native app.
> I’m very critical when a company like Slack cannot find the time and resources to make native clients.
I think they could, but they've decided it wouldn't be a good use of that time and those resources. The Electron app was probably written when Slack was a much smaller company, with less time and fewer resources. Now that they have more time and resources, they can choose to spend that on adding features and acquiring more users, or they can spend it on rewriting their entire app three times (ok, two times; I assume if they decided to go native they wouldn't spend the time on a Linux app). And remember this entire app isn't the app they started with; it has a lot more features than that one did.
It frustrates me too; with the exception of Signal Desktop, I've banished Electron-based apps from my machine (if I had more time on my hands, I would write -- and then spend a ton of tedious, thankless time maintaining and updating -- a native Signal Linux app). I use the Slack webapp in my browser, and it works just fine. And that's also the weird thing: I don't get why so many people use the Electron-based desktop apps, when the webapp has all the same features.
> But on the whole, I don’t think the armchair critics truly appreciate how Electron reduces the cost by at least an order of magnitude.
Compared to what? Java? Have you used C#/.NET, or for that matter Delphi or VB6 in their heydays?
To me, Electron is roughly on par with Qt. Although, like the VB vs. Delphi argument, I think Electron gives you a good initial bootstrap and makes you feel productive, but when you start to hit problems they are very difficult to solve, whereas Qt is harder to get started in, but once you have the core up and running it's possible to bolt things on fairly rapidly.
I've not written anything in Lazarus other than toys, but many of the problems I had with it just a couple of years ago seem to be mostly fixed. AKA, it's been quietly getting more robust every time I install a new version.
So I would be interested in a fair Electron vs. Lazarus comparison from someone who has spent significant time in both.
There's still a huge advantage for Electron - it's still fundamentally the same tech underneath - HTML, CSS, JS. Any semi-competent web developer can use Electron and ship cross-platform apps with it, and heck, lots of code can be shared between Electron apps and the web "app". None of the other options provide these facilities.
But then you have all the problems with JS/HTML/CSS that web developers have, like being able to support scroll bars and the like in a reasonable way. We accept certain poor behaviors from web apps because they are bound by a technology stack that is in many ways suboptimal for the needs of the user.
I was employed doing what we called client-server development in the 1990s, writing the frontends users interacted with in VB and later Delphi. I would put the visual and operational complexity of some of those applications far beyond any Slack-type application (one actually had a chat function built in). They were also distributed, but over proprietary protocols on proprietary wireless networks (until later, when we were some of the first customers for cellular data).

I say this not to brag, but there were two of us working on the front end, both part-time, over a span of maybe 5 years. And we could display large logs/chat histories where the scrollbar reflected where in the data set the user was viewing, and they could smoothly scroll or jump to any arbitrary location. I can't remember the last web application that could do something even that basic. Slack definitely can't.

I'm pretty sure there are more developers working more hours on the Slack front-end than we worked on those applications. So I'm not sure where the claims of developer efficiency come from.
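For reference, the technique maps directly onto the web, too; a minimal sketch of virtual scrolling (element and function names hypothetical): size an empty spacer to the full dataset so the scrollbar maps onto all of it, then render only the visible slice.

    // container: a fixed-height div with overflow:auto
    // spacer: an empty child div that gives the scrollbar its full range
    const rowHeight = 20;  // px per log line
    spacer.style.height = (totalRows * rowHeight) + 'px';

    container.addEventListener('scroll', () => {
      const first = Math.floor(container.scrollTop / rowHeight);
      const visible = Math.ceil(container.clientHeight / rowHeight);
      renderRows(first, first + visible);  // draw only the rows on screen
    });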
People make a lot of claims about developer efficiency, but just about none of them are backed by more than gut instinct, and from the end-user/3rd-party viewpoint those instincts seem completely invalid. The apps are overwhelmingly poor on any number of fronts, and apparently have a lot of man-hours poured into them.
It's not that I don't appreciate it. It's that I _don't care_. I've only got 16 GB of RAM, and much of that is used by things I actually care about, like Airsonic, a Linux VM, fb2k, etc. (yes, my PC is my home server). Even if I were to dump all this, there's a limited number of Electron apps I'm willing to use, and none of them are things I leave open all the time. Authy particularly annoys me, since I have no choice if I wish to use their authenticator (Humble Bundle doesn't support TOTP, only Authy 2FA).
Last time I looked at Lazarus, its autocomplete/suggestions were not on par with what I've come to expect from VSC/IntelliJ and modern languages. Also, who wants to write things in FP nowadays? I was toying with the idea, but I don't want to end up abandoning my project midway just because FP happens to be missing an important library or DB driver or something else.
The Pascal thing has its own advantages and disadvantages. Finding people with Pascal experience is going to be difficult, but I'm not sure this is really any worse than some of the web framework wars: pick a framework, and in 5 years you're likely to have problems finding people who are experts in it, because they've all moved on to something else (although maybe that's settling down; Angular/React are still around). OTOH, Pascal is a far simpler language than JS, has fewer gotchas than JS does at this point, and supports proper threading if you happen to need that, among other things. It's definitely stagnated, but that's probably a good thing if you plan on building a long-term business around it. I'm not sure I would rank editor features like autocompletion very high in technology selection, particularly since you can 1. fix it, or 2. use a different editor.
I would worry about HiDPI support with Lazarus, and how well the Android/iOS/Mac bindings actually work. Maybe a few other things too; basically, anything that is so large a work item that fixing it outweighs the advantages of picking the tool. I think Electron has at least a dozen of these, and that would push me away from it. From where I stand, Electron is closer to Java in its long-term results.
Autocomplete was one example, but I wouldn't underestimate the importance of tooling when you want to onboard people new to the language. Does FP even have a package/dependency manager? How good are the linters?
I wrote my first pieces of shitty Turbo C code before autocomplete was a thing, in a DOS editor. There was no discovery of language syntax or features; you had to have a paper manual open on your desk. So I got frustrated, didn't stick around (a dumb decision in retrospect), and turned to webdev; at least the turnaround from code to result was quicker. For some time I thought it couldn't get any better than syntax highlighting, but it did. And now I just don't want to work with languages that don't have good LSP support. Been there, done that, nope, not going back.
Sigh... "Does [Product X] even have [solution for Product Y problem]?"
A "package/dependency manager" is only necessary in languages based on using umpteen million unvetted modules downloaded from the net. Not all languages work like JavaScript in that respect.
I'm actually surprised that neither Microsoft, Slack, nor a new startup in the space has made a native app yet. It wouldn't even need to be all of them: Teams could be a native Windows app and keep the Electron one for Mac/Linux/mobile, or Slack could make a native Mac one first.
Seems quite mad considering the use/value of the tools and the absolutely enormous user base they have.
Consider that on the Mac you have universal apps (so in theory your iOS app can easily become a Mac app), but they don't want that; they even opted out of the universal app program so you can't install the mobile app on the desktop.
But they didn't migrate because it isn't critical for their business. Teams is just a bonus, but Outlook on the Mac is a 100% native app and works great, because they can't afford to lose customers over an Electron version of Outlook.
Because they already have a codebase that uses HTML for UI, and a team that's proficient in it?
It's also worth remembering that HTML UI is technically "native" on Windows since Win8, where HTML/JS was one of the three stacks supported by all the new WinRT stuff - and, indeed, the one most heavily pushed.
IMO, OS vendors could create an Electron-detection mechanism and make Electron instances use shared libraries and shared memory for their browser runtimes. When you remove the "virtual OS container" that ships with every Electron app (its own browser instance), you remove a significant amount of the resource waste. Then they're pretty much on the level of Qt apps written in Python in terms of performance and memory use.
It's kind of a shame that Chrome apps didn't become a thing, with Electron app installers detecting whether you have Chrome and installing themselves as Chrome apps instead of Electron apps. That would've been another legitimate alternative.
If the OS doesn't get updated, then some of those Electron apps might stop working properly, as they expect to target a particular version of Chromium.
It would be a bit smarter and more generalized than that: it would detect new Chromium 'version bases' and differentiate based on those. Eventually it would stop working, but if you don't update your OS past a certain point, new app versions wouldn't work anyway, or they would fall back to standard mode.
The whole “newness” idea seems odd anyway, considering Electron must be coming up on ten years old now.
> The argument for Electron and React Native isn't "it's modern", it's "it's much cheaper".
This is spot on. I’ve worked on a big electron project before for a massive firm and a lot of work was done before picking that direction. Proof of concept was done for a couple of alternatives but electron ended up better on balance.
I mean, this isn't the first time in history that quality and performance got thrown under the bus for the sake of cost. When developer salaries are your highest cost, and you have to make more and more money, eventually everything will get (and has gotten) sacrificed at the altar of "developer cost".
Historically, commercial software development has never been the place for perfectionist "software craftsmanship". If you want to make the world's fastest performing, native, bug-free, warning-and-lint-free, most beautiful and elegant software project, you're going to have your lunch eaten by the competitor who throws together a barely-working, bloated JavaScript mess in 1/10 the time. The people who take pride and "sand the back of the cabinet" go out of business. At the end of the day, customers (broad generalization) don't seem to care about performance or quality. So, purely by survival of the fittest, these frameworks are going to take over every software project they possibly can. Sad if you care about native/performance, but inevitable as the tide.
It's not (just) about money or profits. As a solo web dev, building for desktop with electron is faster. It saves me time, the most valuable resource. How I choose to spend that extra time - on more features or on vacation - is up to me. I don't want to spend it on learning yet another way (or several) to build UIs and apps, when I already know enough to make a good enough product for my goals. It would be great if Electron was less resource intensive, yes. There's Tauri and other alternatives that will eventually replace it and solve the performance problem.
You can blame OS vendors for trying to build a moat with OS-specific, not web-compatible UI toolkits. Don't want to touch that, don't have to, thankfully.
You are saving your own time and resources, in exchange for those of every one of your users. That insanely selfish attitude is why we are so incensed by this crap --- especially when it's developers making the lives of other developers worse.
Have some respect for your users; they might be developers too.
You're presuming that these users would have a product, even if it took several times longer for the developers to build. That isn't always (or maybe even often) true.
There are tradeoffs to be made; being absolutist about it isn't helpful.
> I think there's already more than enough quantity
That's just not true though. People need so much software, I've written more than one mediocre electron app for niche use cases that saved users multiple hours a week of mind-numbing work. That software that they're so happy to have in their lives would still not exist if it weren't for electron.
Almost none of those users are willing to pay 4x the price for native iOS, Android, Windows, Macos, and Linux apps (on top of the web version).
Also, I do not, in fact, have 4x the time to implement all that, vacation or not, nor do I have enough revenue to justify hiring a team of specialized developers.
It's about as selfish of me to be building on Electron as it is of you to not be working full time on Tauri or some other secure and high performance solution to this problem. Which is - not at all. Nobody is entitled to our dev time for free or against our will.
Realistically, the alternative to an Electron app is not native apps for multiple platforms. It is either a web app or a native app for a single platform.
Fully agree. I have a background in C/C++ and a bunch of other languages, so I'm probably in a better position than most to write native apps, and I'm no Electron fan, but I never got into native GUIs for a reason: certainly if they have to be portable, it's an absolute mess. I already threw together basic VueJS web apps as management interfaces for other projects, and now I need a similar interface, but with some OS-level access. My choice right now is not between a native app and an Electron (or similar) app; it's between having an actual viable project/product and having nothing at all.
I can't afford to waste time and money on native apps for multiple platforms. And I know that once my thing would bring in money, I could easily find someone not too expensive to take over the work on that UI.
Native apps cost a LOT more money, not to mention the organisational overhead of bringing feature parity to all of them. Bringing something native to 4 platforms is nowhere near a 4x cost; optimistically it would be more like 8x.
I thought the issue was the browser being privileged in performance by virtue of being part of the OS (security concerns aside).
I conjecture that an alt-universe Microsoft could have dodged that bullet with different code structuring. If there had been an "HTMLReader.dll" that handled HTML page rendering, and Internet Explorer, its help file reader, and any other implementer of the API could all use it, there would have been no antitrust issue, because it wouldn't technically privilege their own applications. Not a lawyer, but it would be interesting to know whether I'm right or where I went wrong.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
I learn plenty of technologies - those that I want or need. I neither want nor need to learn any desktop GUI frameworks for my purposes, so I don't. Certainly not going to do anything just because some troll told me to.
> If you want to make the world's fastest performing, native, bug-free, warning-and-lint-free, most beautiful and elegant software project, you're going to have your lunch eaten by the competitor who throws together a barely-working, bloated JavaScript mess in 1/10 the time.
On the other hand, things like kdb+ and the like show that there is a market for excellence. It's unfortunately rare, but it's there.
There are multiple platforms and nobody wants to write a separate app for each one. So now you need a framework to make apps multi-platform.
Microsoft has resources but no incentive to do this. They have the most market share. Before Electron existed, people would just write native apps for Windows and nothing else, locking people into Microsoft.
Apple has resources, they have the right incentive because they don't have dominant market share on desktop, but they're obsessed with native look and feel. Cross platform frameworks are antithetical to that, so they don't do it.
That leaves the Linux people. So we get Qt. Finally something that works. Except that this is the opposite of what the Linux people are good at. It's not "do one thing and do it well"; it's reimplementing the equivalent of an entire operating system's APIs on at least three different operating systems. That's a lot of work, it's hard to do right and easy to get wrong, and they don't have a mountain of cash behind them. So it works, but it's full of sharp edges.
Enter Electron. Browsers had to deal with the same multi-platform nonsense but Google does have a mountain of cash so it's easier to use. Winner, efficiency be damned.
What might have worked, if anybody there had thought of it, would have been for Apple to port their native APIs to Windows (because it has the most market share) and Linux (because it's a fellow Unix-like, so that should be pretty easy). Then you could easily compile native macOS applications for other platforms, but they would still be native on macOS. That solves their problem, because now everybody is going to target their native API, and if those apps run less well on Windows, that's fine as long as they run at least as well as Electron does. Big win for them, because now that becomes the target and they're the only ones with true native applications.
From my experience, these Microsoft frameworks never really get picked up by larger parts of the community.
I feel like Microsoft's approach is to create a framework for everything and hope it sticks - though they appear to repeatedly fail. Nobody really used Xamarin. Nobody is using Blazor.
The very minimal documentation for MAUI doesn't really scream confidence in their own product, either. I predict this one will die as well.
This, so much. Google gets a lot of critique for killing off its products, but Microsoft does the same in the developer community. Except they don't even have the decency to properly announce it when they consider something abandonware.
Such web solutions are cheaper because the software market is really sick. There is almost no point in competing on quality and performance for consumer software, because some start-up with hundreds of millions of VC funding or some SW megacorp will enter the market and give their mediocre product away for free while paying engineers N times more than what you can offer. Bonus points if the software is at least partly open source or integrates with some standard (which will get nixed once they have enough market share).
And those VCs and that CEO won't be particularly concerned with technical excellence, they'll focus on their metrics regarding user acquisitions, market share, etc.
Since technical excellence is of at best secondary importance, it makes sense to commoditise the technology and make it as easy as possible to build such applications. Build some framework, give it away for free, ideally open source. Brag about how you're enriching the world through your open source contributions. Optionally snicker at the fact that you're in fact controlling how things get built on the so-called open web.
Relax on your yacht while drinking a good champagne. Don't explain to the poor sods still worrying their brains about technology why "the economics" prevent building quality software, because you don't give a fuck anyway. One of your employees or the employee of some other web company will surely set aside some time to clarify this, like the good little employee that they are.
I think the underlying explanation for this is that the world sucks. Most of the companies that are commonly thought of as tech companies aren't.
My definition of a tech company is a company that lives and dies by the quality of technology they make. For example Nvidia is a tech company. If someone made GPUs that are half the price, but twice as fast, Nvidia would be out of business in a jiffy.
Spotify isn't a tech company - if someone made a better music player, they wouldn't be able to disrupt Spotify, unless they somehow managed to make a deal with every record label under the sun.
In the 2000s, I had a 20GB Music folder full of music I liked, and I played it using Winamp, which supported searching my library as fast as I could type. It didn't have all of Spotify's features, but it did the core value proposition of playing music better than Spotify does today, imo.
The problem is that Spotify is a package deal - if you like the service, then you must use the client - and they are under no competitive pressure to make a better one.
The 'it's much cheaper' argument is ridiculous - it's not that hard to create a native app that plays music - I doubt the people who wrote Winamp commanded the salaries FAANG people writing Electron apps do today.
'Tech' companies make much more money today than they were making when they were writing native stuff.
> hiring a few JS bootcampers to build one react UI that works on every platform is extremely cheap
But this is the point: Technological development could go in one of two directions: Allow skilled people to build increasingly bigger and better things, or allow increasingly low-skilled people to displace the skilled people by creating things that are worse but still vaguely fit-for-purpose. -- The industry is going the latter path, which is highly offensive to those skilled people.
The thing with Electron et al is that the GUI dev tools on it are the best thing humanity's come up with, by a country mile. It's vastly, vastly easier to develop a really good, clean app with a great UI in it — especially a UI that needs lots of unique, new components that aren't in an OS-level GUI API's standard set. Compare really well-built Electron apps (VSCode, Discord) with their competition, and it's such a staggering, one-sided comparison in "fit and finish", attention to detail, and not having weird little bugs that never get fixed. I say this as a former OS partisan, but even Apple, previously the trophy-holder for "best GUI-builder tools", is way behind the curve. Steam, too, is now an Electron app.
The thing that's just the brutal nail in the coffin is — if it just had the attributes mentioned above, it'd be a huge win. But to have "nearly perfect cross-platform support out of the box with no effort?" Jesus.
The thing about Electron is — some 5 years from now, we're likely to start shipping Electron apps that run on WebAssembly rather than JS, and at that point there won't be any valid counterargument. They'll have all the benefits, and run just as fast and lean. Game over.
I'd argue that Steam isn't 'now' an electron app but was the OG 'electron' app. The store and community interface have always been web UI from day one.
Originally it wrapped the IE/Trident engine, and then it transitioned to Chromium.
> The thing with Electron et al is that the GUI dev tools on it are the best thing humanity's come up with, by a country mile. [...] even Apple, previously the trophy-holder for "best GUI-builder tools"
You've never used Delphi, FP/Lazarus, or even Visual Basic, have you?
A bit of a digression, but iMessage is supposed to be a native app, and it sucks ass on macOS: buggy, slow to load previous messages, and I have to force quit it from time to time.
From my experience, many native apps suck. But a lot of them are good. Same goes for electron apps.
Also, the native app development process is way worse than solutions like Electron and React Native in terms of feedback loop and DX. Of course those have downsides as well, since they're abstraction layers.
> The argument for Electron and React Native isn't "it's modern", it's "it's much cheaper".
I'd argue it's not cheaper overall, just for the company choosing to do the development.
What I mean is, the lost productivity of waiting around for slack to load a channel is essentially outsourcing this cost difference via poor performance.
And yet the market (the users) choose to use Slack over alternatives. If this lost productivity was a problem for the general market, Slack wouldn't have the market share it has.
From what I can see Slack's market share is dwarfed by Teams'[0][1] and similar to Discord's[2]. This is not strictly an argument against your broader point as I believe all these apps are essentially the same in technology stack.
Regardless, it's hard to look at that one feature and draw that conclusion from it. There are a lot of other pros and cons to be weighed when making any kind of decision like this. I'd suggest hosting, overall UX, and corporate support probably make up far more of the critical success factors for these kinds of apps than they did for their older competitors. The lost productivity and poor performance are the cost of these other features, apparently. Perhaps in the future there will be a new disruptor to the market that does this same thing but with a fully native client application that is stabler and more performant than its competitors.
I personally consider IRC (or I guess more precisely, IRC clients like Weechat (Not to be confused with China's WeChat)) a contender, as it provides chat functionality as well as the ability to create channels for themed discussions.
No, it doesn't have emojis or inline multimedia, but IMO it delivers the important part of chat and leaves the unimportant parts for other applications (like opening a hyperlink from IRC chat to see the clever meme someone posted). It also doesn't do video or voice, but there are other applications for that as well. For purely text chat, IRC is a lightweight (and self-hostable FOSS) solution.
>What I mean is, the lost productivity of waiting around for slack to load a channel is essentially outsourcing this cost difference via poor performance.
What makes you think that Slack's loading times are due to Electron and not a shitty backend?
If it were the backend, I would not expect the load times to improve when loading Slack into a browser tab where its resources are constrained.
Separately, if it were strictly a Slack problem and not Electron, you wouldn't expect to see these kinds of issues crop up across the spectrum of Electron apps. Scroll through this thread and you can see a variety of people running into these problems in all kinds of different electron-based apps:
> The argument for Electron and React Native isn't "it's modern", it's "it's much cheaper".
Expectations have changed.
It is 2004, your company needs a simple CRUD app so your employees can work with some structured data in a DB somewhere, let's say so sales can check inventory levels in an existing database.
A single developer can start up a Winforms project and throw something together in a matter of days to weeks. It can only be used by people on desktops running Windows while they are at the office, but that is fine because that is the world of 2004.
2022, your sales team needs to access inventory levels in a database. Assuming you haven't already been sold some multi-million dollar solution to do this (and you likely have, multiple times, and at least some of the implementations have failed), you now have the following requirements:
1. Some of your employees use Apple laptops, some use Windows.
2. People want to be able to access the data on their phones as well, which adds another 2 OSes to the mix.
3. Restricting the app to being onsite in the office doesn't cut it.
So, you can write 4 (!!) native apps. The ugly WinForms one is still simple, but you can either adopt a proprietary "simple app building" solution for the other 3 OSes (or just use it for all 4 OSes), and have serious issues finding devs who know the niche tech you've picked, or you can make a website.
Now, native apps on smartphones are a constant maintenance headache. Major OS releases break things on mobile all the time, and entire APIs get deprecated. (Another reason why writing against Windows was good: that WinForms app from 2004 probably runs just fine with 0 changes in 2022.) It is easier to maintain 1 website than 2 mobile apps and 2 desktop apps.
So, website it is.
What framework are you going to use? Whichever one is stable, easy to hire for, and has the best tooling. OK, so I wouldn't call React "stable" - hopefully it has undergone its last major redesign for a while (LOL) - but there is tons of tooling for it. Of course the docs suck compared to what Microsoft had in 2004, because it turns out part of that $500-per-seat license for Visual Studio went towards amazing documentation and example code. And because it's the Web, you'll get breaking changes now and then, but it is still easier than maintaining 4 native apps.
People underestimate how simple Windows everywhere made life for developers. Amazing documentation, stupid good tooling, and an obscenely stable platform to develop against. A major OS version came out what, once every 3 or 4 years? And unless you were writing drivers, that major OS version wasn't going to break anything.
>A single developer can start up a Winforms project and throw something together in a matter of days to weeks. It can only be used by people on desktops running Windows while they are at the office, but that is fine because that is the world of 2004.
...
>People underestimate how simple Windows everywhere made life for developers. Amazing documentation, stupid good tooling, and an obscenely stable platform to develop against. A major OS version came out what, once every 3 or 4 years? And unless you were writing drivers, that major OS version wasn't going to break anything.
I was doing this exact work during this time, and things weren't that easy back then either. Does anyone remember DLL Hell[1]? Even if everyone was running Windows on a corporate machine, it was still more difficult to deploy new code than it is today. Being able to deploy the same code to a wide variety of different machines is a huge cost saver today.
I was doing that exact work back then, as well, and DLL hell was mostly gone by then. If you had a dependency, you simply packaged it alongside your main binary, and it wouldn't affect any other app on the system. DLL hell only arose when apps would try to install their dependencies globally (i.e. into \WINDOWS or \WINDOWS\SYSTEM32), but it was already rather uncommon by early 2000s, except for the most foundational runtimes like the C++ one.
.NET pretty much solved the problem even for globally deployed dependencies by imposing physical versioning, and added file signatures for good measure, so that your dependency would literally be on that exact copy of the DLL.
I had direct experience with some apps in VB6, some in Delphi, some in .NET/WinForms, and even one in C++/wxWindows. None of them had any DLL hell issues. Not as in I dealt with them, but as in they simply never arose in the first place.
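For the curious, that versioning mechanism is visible in an ordinary app.config: a strong-named reference binds to the exact version it was built against unless you explicitly redirect it. A hypothetical sketch (the assembly name and token are made up):

    <!-- app.config: without this redirect, the app loads exactly the 1.x DLL it shipped with -->
    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="SomeVendor.Widgets"
                              publicKeyToken="0123456789abcdef"
                              culture="neutral" />
            <!-- Allow any 1.x reference to resolve to the installed 2.0.0.0 copy -->
            <bindingRedirect oldVersion="1.0.0.0-1.9.9.9" newVersion="2.0.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>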
> Even if everyone was running Windows on an corporate machine, it was still more difficult to deploy new code than it is today
Is that true? As someone who was also doing native Windows desktop applications back then (C# and VB in .NET from 2006-2010), ClickOnce handled everything pretty reliably for Windows XP/Vista/Server (even with weird DLL requirements -- for example, I had to ship a QuickBooks Desktop API wrapper and proprietary Epson drivers in our .NET desktop app)
Yes, it's not quite as reliable as "go to my-app.com". But it was way more reliable than Electron's current self-updating (.net ClickOnce loaded in a tiny fraction of the time that Slack and Discord's Electron apps take to check for updates on every launch)
Well that Wikipedia page exists so I wasn't the only one running into the issue. I will note that the time period you listed is a few years after this problem peaked. These issues were a lot more pronounced in the VB Classic era and were greatly improved in the first few releases of the .Net Framework. Your timeframe is just after that.
I cut my teeth during that era and the level of complexity asked for from those apps was minor compared to what people will demand today. Your native app will also require every machine to install a bunch of extra libraries before they can even run your little CRUD app.
At some point you maybe wrote a cute little auto-update function that let you update the application automatically... until IT slapped your hand and told you never to do that ever again.
You could write a similar web app in any modern framework in just a few days as well, and have it work on any device with a browser. It updates automatically and IT doesn't give a shit. Yay!
I have a pet theory that one of the big things that drove SaaS adoption was that people didn't need to deal with IT gatekeeping. You could get the latest software RIGHT AWAY! As soon as the devs finished it! Incredible!
Wow! I remember working for a company having to deal with your 2022 scenario way back in 1997! Only back then we didn't have npm, react, or any other of a dozen Javascript frameworks I've forgotten the name of.
Oh, and we did it without Visual Studio as well. Kids today.
> So, you can write 4 (!!) native apps. The ugly WinForms one is still simple, but you can either adopt a proprietary "simple app building" solution for the other 3 OSes (or just use it for all 4 OSes), and have serious issues finding devs who know the niche tech you've picked, or you can make a website.
Or you can adopt a free software / open source simple app building solution for all four OSes (or, hm, at least three of them -- not quite sure about iOS) that compiles natively on these and many other OSes. Sure, the UI would preferably be designed differently for desktop vs small-screen mobile, but you have that problem with Electron too, don't you?
> OK so I wouldn't call React "stable", hopefully it has undergone its last major redesign for awhile (LOL), but there is tons of tooling for it.
I'm not sure what you're talking about. The last major feature addition to React was Hooks in 2018. But that didn't break stability at all - you can still use all of the old React APIs and ignore that Hooks exist entirely. The React team has actually been incredibly thoughtful about introducing new APIs, rarely breaking backwards compatibility, and it's a library I generally trust _not_ to break anything on updates.
If you work somewhere that insists on flavor of the month development using the shiniest libraries and the newest features, yeah, there's a lot of churn, but that's a very different complaint from "I wouldn't call React stable".
> The last major feature addition to React was Hooks in 2018. But that didn't break stability at all - you can still use all of the old React APIs and ignore that Hooks exist entirely.
Yes, except that hooks completely changed how React is written, and most libraries and code samples nowadays use hooks. You basically had to learn a brand new way of writing React. I learned React pre-2018 (tens of thousands of LOC written) and I basically "don't know React" now.
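To make that concrete, here is the same trivial counter written both ways (a rough sketch; component names are made up):

    // Pre-2018 style: a class component keeping state on `this`.
    class Counter extends React.Component {
      state = { count: 0 };
      render() {
        return (
          <button onClick={() => this.setState({ count: this.state.count + 1 })}>
            Clicked {this.state.count} times
          </button>
        );
      }
    }

    // Hooks style: the same component as a function with useState.
    function CounterWithHooks() {
      const [count, setCount] = React.useState(0);
      return (
        <button onClick={() => setCount(count + 1)}>
          Clicked {count} times
        </button>
      );
    }

Neither is wrong, but they share almost nothing in structure, which is why fluency in one style doesn't transfer to reading the other.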
Then there is the ecosystem churn. I got tired of my routers breaking, all the bloody time. In 2 years I had to rewrite the routing for my code twice, each time took multiple days of frustration[1].
There has been the rise and fall of Redux, which, again, meant massive churn in the ecosystem.
[1] In defense of React I was using React Native Web which, at least at the time, was a pile of many small disasters combined into a single code base. Also it wasn't as well documented as I'd like, and eventually I had to scrap it and move to regular React, which solved many of my issues.
> And unless you were writing drivers, that major OS version wasn't going to break anything.
Just a minor nitpick, but the NT device driver API has in fact stayed largely the same since Windows 2000, in contrast to the various technologies that came and went in userland.
And then MS got pissed at Creative for having drivers so bad that crashes from Creative sound card drivers accounted for a non-trivial % of all Windows crashes. The result was MS changing the entire Windows audio stack and eliminating Creative's most lucrative product line.
I don't have much to add, except that WinForms was an absolute dream to work with. I was literally illiterate in any kind of programming and I was able to build stupid little apps that actually did things.
I remember writing one utility that would turn off the monitor with a global key shortcut. It had a little window with just one button, and would minimize to tray.
I was able to build it just by downloading the free version of Visual Studio and reading the docs. The extent of my programming knowledge at that time was doing some silly LOGO stuff in school.
And internal websites are a direct competitor to Winforms.
Every year someone tells me Xamarin Forms is better now, but every year someone also tells me that they tried Xamarin Forms the year prior and it still wasn't any good. My one experience with Xamarin Forms was watching 3 app teams (WinPhone, Android, and iOS) all struggle to write an app. Given that it took 3 teams, and it was a large effort, I wasn't impressed - but this was 6 or so years ago. I hope things have improved since then!
Fun fact: for a while Silverlight was huge in the internal website world, and Microsoft could easily have had another decade of lock-in going, but then MS killed Silverlight due to internal politics, and even the most loyal MS customers saw the writing on the wall and redirected their internal development efforts elsewhere.
90s Microsoft would never have abandoned a technology stack that had wide spread internal corporate adoption. :/
> The argument for Electron and React Native isn't "it's modern", it's "it's much cheaper".
Cheaper sounds like something that is always the right thing to aim at, but it also constrains the product and the business around it to a model that has shrinking margins. That might be the best bet to make. It rarely results in durable businesses. Unless there's some other sort of moat in play.
Comment fails to account for user perspective. As a user I do not want Electron nor React. It does not solve a problem for me, it creates a new one.^1 I would rather someone like UNIXSheikh make the "engineering" decisions.
1. I avoid both Electron and React. I like UNIX. Runs on many platforms.
I definitely empathise with the author. The web space is particularly prone to fashion-driven development (FDD). Despite that, I think he's a bit off the mark. Most consumer-facing development isn't, and has never been, "engineering" outside of safety-critical applications. It's more akin to domestic plumbing. You're connecting existing components together, and while there's a skill to it, the skill ceiling isn't that high. Programming, however (as opposed to development), even where software engineering does take place, is a craft. It's something that you learn by doing, and it takes time to learn to do it well.
Modern frameworks and software development techniques are often the equivalent to handing a complete amateur the keys to a fully equipped wood working shop, complete with laser cutters, CNC routers and a load of power tools and letting them have at it with only a few YouTube videos for guidance. Sure they'll be able to make things, and larger and more impressive things than if they could only use hand tools, but the resulting creations will be over engineered, or rickety, or just plain badly designed compared to furniture created by someone who's spent a few years apprenticed to a master craftsman.
And you know what, that's fine. 99% of web development is still teenagers desperate to be the next Elon Musk hawking Tinder for marmosets to VCs, WordPress sites and e-commerce. It doesn't matter if it's a bit shit and is developed by people who learned their craft on Udemy and by watching 10 ways to turn yourself into a l33t programmer videos. The concerning thing is when that philosophy starts to infect areas where actual safety critical software engineering is done. No one wants a l33t programmer with massive scrum skillz designing the control system for a nuclear reactor...
To be fair though, isn't "it's cheaper" just the same thing the article laments, from the consumer side? It just extends the argument from "devs are cheap" to "users are cheap as well".
The overriding concern seems to be that if we continue to stack crappy tech on top of crappy tech and accept it as consumers, in twenty years we'll have built the Tower of Babel (some might argue we're there already), and that this is terrible long-term engineering.
If we treated semiconductors or aerospace engineering the same way, we wouldn't have any chips to run Electron software on, and the planes would keep falling out of the sky.
I mean, you named the precious few that don't suck, and two of them do essentially the same thing, and the other is a music player with a very questionable business model.
I'm with the op. I'm deeply unimpressed with the last decade or so of software development; and that's not just blowing smoke -- this is day to day practice. With the exception of Discord, I deliberately take pains to use old software because it's better.
Moreover, I find when I show non-tech folk "how I do things with old software," most want to learn more, and nearly none are satisfied with what they are "given."
> perhaps with the only exception that now a 2 year old baby can make something shiny that you can click on with your mouse.
True, and the author highlights the benefit himself.
Maybe there is an underlying truth here, in that the number of users isn't indicative of quality, and these apps might be even more successful if implemented differently.
Apps like Teams are severe resource hogs; software is getting slower even as machines improve. I use Electron myself for visualization, but I know it is not the optimal choice.
And that's exactly his problem. It's about dumbing down the development cycle, with often terrible results for the end user. It's a race to the bottom: "how low can we go" and "what will our users tolerate".
Not all Electron apps are bad, but that's because their developers are still doing a ton of work to actually make them a little bit smoother. And that is in spite of Electron, not thanks to Electron.
To be fair, Discord is only as popular as it is because Microsoft was insistent upon alienating its Skype user base to the maximum extent of its ability.
> Hiring experienced desktop application devs to build a quality native app for each platform is going to be expensive,
So it looks like software would vastly improve if Silicon Valley companies moved to India or Russia or some other place with an abundant supply of capable developers.
Regardless, what you said doesn't show that it was a strawman; it was still a valid conclusion, but you're offering the real reason behind it.
Depends what you're writing. People aren't going to write performance-sensitive apps in Electron, like FPS games where customers demand 4K 120 FPS with full graphics enabled. But for Image burners (balena), Drone software (Betaflight), and Sophisticated code editors (VSCode), it's hardly a concern.
Yeah, it's worth pointing out that people are blaming electron for what's wrong with Slack. When, perhaps, it's just Slack itself that sucks?
I use Discord all the time, which is one hell of an oranges-to-oranges comparison (with Slack), and not only does it have none of the perf problems of Slack, it's also faster and snappier than any NATIVE chat app I've used.
There's a lot that can be done wrong in any framework; as recently as Windows XP, there were a few atrocious algorithmic-complexity gaffes in basic file viewing (directories with more than a few hundred files would slow to a crawl in the GUI, taking multiple minutes to display). It wasn't the fault of the framework it was written in; it's just that someone at MS had written something that had to scan the directory in a hurry, and ended up accidentally writing an exponential-time algorithm. Slack likely has a few deep-seated goofs in their architecture that are just too big to extract and fix.
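A contrived sketch of that class of mistake - not MS's actual code, and merely quadratic rather than exponential, but the same failure mode of "fine at 50 files, minutes at 50,000":

    // Listing N files, but rescanning the directory once per entry:
    // O(N) work becomes O(N^2) without any single line looking expensive.
    const fs = require("fs");

    function listDirectorySlowly(path) {
      return fs.readdirSync(path).map((name) => {
        // The gaffe: a full rescan per entry, e.g. to recompute sort position.
        const fresh = fs.readdirSync(path);
        return { name, position: fresh.indexOf(name) };
      });
    }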
If nearly every piece of professionally written, quality-vetted JS code is so slow that it takes several seconds to do things that software used to do instantly 60 years ago, is that really not a problem with the language?
Yes, JS can be much faster than every single piece of it you will find people using. It's just that every developer that has some main focus different from execution performance is holding it wrong.
Totally agree. And I'd argue it's not even a new trend. Consider Infocom's Z-machine. What was that if not the Electron of its day? And the right decision from both a technical and business perspective.
What is it about building "web-related technology" that makes it so cheap? Is it actually that much cheaper? And if so, why? Why isn't there a platform for making apps that compares?
> What is it about building "web-related technology" that makes it so cheap?
Mainly a huge existing (and ongoing) investment from Google.
> Why isn't there a platform for making apps that compares?
Not enough money in it for a company that would make something properly polished, and the open-source stuff works well enough for its own developers and gets bikeshedded to death whenever anyone tries to add good defaults and provide an easy onramp (if anyone even cares enough to try). Like, in theory you could glue together something with Python, Qt, PySide and Freeze where someone can write two lines of Python, push a button, and get GUI executables for all major platforms. But good luck getting anyone to accept your patches or commit to not breaking compatibility with what you're doing.
A huge pipeline of labor who understands it and mature tooling that creates a much lower skill floor for labor to be productive. Bootcamps, tutorials, freelancers, contractors, there's a _lot_ of resources in all sorts of languages about web development. Many fewer resources on trading packets over TCP, or making a desktop GUI, and even fewer on cross-platform GUIs.
I feel comfortable with asserting that the author has never worked on a complex web application. This web-dev hate borders on pathological: this is a guy that seems to specialize in Unix/Linux tooling, doesn't have a single tutorial on web app development [1], and yet has an intense hatred towards a space that he barely understands.
These types of arguments are incredibly embarrassing to read, not only because they betray a stunning lack of familiarity with the topic, but it's basically a conspiracy theory: everybody is stupid except me, all these successful companies making billions of dollars and attracting top talent are just throwing money into a fire for fun.
I have been in professional web development since 2004 and I mostly agree with the author that there are massive amounts of groupthink going on. "Modern" web development has standardized in tool stacks which are insanely complicated, far beyond anything that is warranted in most cases. We have forgotten how to make simple things in simple ways.
At a minimum you need node, npm, webpack, babel, an SPA framework, a frontend router, a CSS transpiler, a CSS framework, a test runner, an assertion library, and a bunch of smaller things - and that's just what is "needed" to build a static website with a bit of interaction. We're not even talking about the dockerized insanity that happens as soon as you want to slide an API under that beast.
I understand why every piece is there, I was there when they arrived on the scene, I understand what problem they solve. What I don't understand is why as a group web developers have decided this is the only way to solve the problem of web development. What we don't have are simpler web stacks. Why do we need npm or babel at all to make a simple web frontend? Modern browsers are good enough that with the right tooling we don't need build pipelines or package managers. Similar arguments can be made for the server-side parts.
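As a sketch of what "no build pipeline" can look like: the following is a complete interactive page, and it needs nothing beyond a text editor and a browser (all names are made up):

    <!doctype html>
    <html>
      <head>
        <meta charset="utf-8">
        <title>No build step</title>
      </head>
      <body>
        <ul id="list"></ul>
        <!-- Native ES modules: no node, npm, webpack, or babel anywhere. -->
        <script type="module">
          const items = ["plain", "old", "javascript"];
          const ul = document.querySelector("#list");
          for (const item of items) {
            const li = document.createElement("li");
            li.textContent = item;
            ul.append(li);
          }
        </script>
      </body>
    </html>

Obviously a large app needs more than this, but a surprising share of "static website with a bit of interaction" projects don't.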
Usually Docker has resulted in a lot less debugging of "works on my machine" issues, because unless you have patched your kernel in some really messy way, it is impossible not to reproduce issues when testing in Docker.
Hell, I single-handedly moved a startup-ish company from uploading PHP scripts to a "development server" (a VM on a scruffy Proxmox box in the closet), where they didn't even know if all the code was current and constantly stepped on each other's toes, to a docker-compose.yaml, a Dockerfile, and an A4 cheat sheet on how to use it (including resetting the database and such), in a week or so. It's an amazing tool for ensuring consistency.
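The whole setup was roughly this shape (a reconstructed sketch, not the actual file; service names and images are illustrative):

    # docker-compose.yaml: "docker compose up" gives every dev an identical stack
    services:
      app:
        build: .                  # the Dockerfile in the repo root
        ports:
          - "8080:80"
        volumes:
          - ./src:/var/www/html   # live-edit PHP without rebuilding the image
        depends_on:
          - db
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: dev-only-password
          MYSQL_DATABASE: app
        volumes:
          - dbdata:/var/lib/mysql # database survives "docker compose down"
    volumes:
      dbdata: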
Don't get me wrong. I deploy all my API's in docker containers as well, for similar reasons. But I do feel like a victim of stockholm syndrome whenever I think of docker as a solution instead of a workaround for problems self-imposed by our byzantine development tooling stacks. It is now even starting to make sense to run the entire suite of tools for local web development inside of a local docker container, because of the risk of subtle conflicts in devDependencies between machines. At what point are we going to concede this way of building software is madness? How many layers are too many layers?
> Why do we need npm or babel at all to make a simple web frontend?
See, that's a reasonable argument: many companies tend to reach for complex web tooling when simpler tools would do for the task at hand. But that's not what the author is arguing. He's convinced that none of this tooling should have ever been produced in the first place, it exists solely to stroke the egos of their creators, and anyone who uses it is stupid. That's just absurd. I've worked on complex client-based applications that would have been an order of magnitude more difficult to develop, maintain, and refactor without React.
Wouldn't you agree though that the vast majority of clients don't have that complex a need?
These tools are taught to new developers as "the way things are", yet a significant portion of them statistically must be working in lower-complexity environments that may not be served by this level of tooling.
Sure, but again, the author is explicitly not arguing that. Look:
> They keep inventing "revolutionary new ways" of doing the same thing that could be done in a dozen ways already. And they do that by coating more and more and more unnecessary complexity on top of existing technology stacks.
He's mad that the tools exist at all. He's going after the people making them, not just using them.
> At a minimum you need node, npm, webpack, babel, an spa framework, a frontend router, a css transpiler, a css framework, a test runner, a testing functions library, and a bunch of smaller things, and that's just what is "needed" to build a static website with a bit of interaction.
No, you don't. Just because you use React doesn't mean that everyone else also has to do the same.
I seem to notice this particularly often with a subset of developers. People who have a very strong "who moved my cheese" mindset. They are used to doing things a certain way. An approach that is simple to understand, but takes a very long time to get anything done. Ie, writing vanilla HTML without any frameworks. Besides, they aren't the ones paying for a team of developers, so they don't really see a downside in spending 1 month to do something that could have been done in a week.
And then all of a sudden, everything starts changing. Nobody wants to hire a vanilla JS/HTML developer anymore, they want react developers. And these react kiddies are able to put together a webapp that looks so much nicer, has more functionality, and they are able to do it in less time.
They decide to try learning react, but find themselves with a tough problem they haven't dealt with in a while - being outside their comfort zone. They are so used to doing things a certain way. They know exactly how to do it. They can do it blindfolded. And all of a sudden, they find themselves having to change everything up. Overnight, they go from feeling like experts to feeling like novices. And they hate it.
If they pushed through it, they would eventually gain mastery over it and learn to evaluate this new technology fairly on its merits. But they never do master this new technology. Feeling like a newbie is too psychologically painful. But they also recognize how much of a disadvantage they are at by not riding this wave that is taking over the industry. And so, rather than expressing indifference or acknowledging personal taste, they lash out with tremendous resentment and anger.
> No, I will not be polite and call it something else because it is truly sheer stupidity. In the so-called modern day it's like everyone - except a few - has dropped their brain on the floor. Has everyone except the "unix greybeards" gone mad!?
I get it. Change is scary. But it gets even scarier when you try to resist it instead of embracing it.
> And then all of a sudden, everything starts changing. Nobody wants to hire a vanilla JS/HTML developer anymore, they want react developers. And these react kiddies are able to put together a webapp that looks so much nicer, has more functionality, and they are able to do it in less time.
Don't want to support the author, but I want to say this has not been my experience. It seems to me that for most types of web pages, web development is slower than in the old days. HTML and jQuery were really good for what were mostly static pages with a small amount of logic and state, which is still what most web pages are.
However feature rich applications were a mess using older tools. Code organization was difficult and you had almost no hope of keeping complex state lined up in the UI, React has been a godsend for this kind of work.
But developers as a whole are very junior (iirc as of this year the median dev has 4 years of experience) and I think most can't pick which tools suit their use case.
> And these react kiddies are able to put together a webapp that looks so much nicer, has more functionality, and they are able to do it in less time.
If that were true, then great, it would represent a genuine step forward.
Most of the time, you get a webapp that looks the same, works the same, but takes longer to develop (with a larger team of specialists), fails at basic accessibility, fails at SEO, is more brittle, buggy and harder to maintain.
And not to generalize but those React "kiddies" are very much everything-looks-like-a-nail guys. Because they lack breadth of experience and training they'll shove that one tool into every project, even where it makes no sense.
Greybeards now literally have grey beards. It must be hard to have pioneered software development and then watch it change under your feet at an escalating rate. I think the measure of a good engineer in the 80s/90s was more focused around "hacker" culture: speed, memory footprint, "cleverness".
These days, a good engineer can scale, be flexible, and is a polyglot. I think the goal posts have shifted for the old schoolers, and it may be just one of those things.
I’ll just sit on the sidelines, taking hackily-written messes of software produced under tight deadlines and improving the heck out of their performance, and make my money being able to deliver that narrow area of value, thank you :)
Some of us “evolve” by becoming specialists.
Edit: and I’ll take backend code written by a skilled backend engineer and infra code written by a skilled dev ops person over all-of-it written by a “polyglot/front-end-dev-on-assignment” every day of the week. Just keep us away from the front end, we can’t make things pretty or interactive for garbage.
I don't necessarily disagree with your take but I think you should be making substantive counter-arguments to what the author wrote. Your post is essentially just attacking the author and of little substance in itself.
The original article isn't substantive. I mean, c'mon, the entirety of the argument against Electron apps is
> They eat up all the memory you have and still ask for more. They constantly crash and has no value over a native desktop application what so ever - well, perhaps with the only exception that now a 2 year old baby can make something shiny that you can click on with your mouse.
I'm kind of a curmudgeon about Electron apps, and I'm still reading that and going, "I can't think of a single time I've had an Electron app crash on me, what kind of genius two-year-olds are out there developing in React Native, and what kind of juvenile complaint is 'make something shiny that you can click on with your mouse' (as opposed to native desktop apps, which as we all know are matte black and only take keyboard input). Also, for Strunk & White's sake, 'whatsoever' is one word, and 'they' goes with the verb 'have', not 'has.'"
And the whole article is like this. He goes on to complain about PHP templating systems because PHP itself is a templating language, and sure, yes, argument to be made there! Just…not this argument. Most PHP templating systems compile their templates to pure PHP, so if anything is making "the application load four times as slow" it is not that. Then he goes on to tell us "all web servers has a build in router" -- there are either two or three syntax errors in that seven-word sentence, depending on whether you count "build in" for "built-in" as one or two -- and rails against how web frameworks are eschewing web servers' built-in routers (by which we mean…URL requests?) and instead insisting on implementing the revolutionary new (checks notes) front controller pattern. Yes, you're right, web development would be so much easier without that, wouldn't it? Also, while we're here, let's talk about line numbers. Why aren't we still using line numbers? Named functions are for wussies. In my day, we used GOTO both ways uphill in the snow and we liked it!
Wait, sorry. Where was I?
Anyway, it's actually pretty hard to make "substantive counter-arguments". The article is badly argued and badly written. I would not normally be quite so much of a jerk about the numerous syntax errors, except that this ends with a suggestion that people, you know, pay for this writing. I don't expect perfection (God knows I make tpyos all the time), but the closing paragraph starts with a sentence that literally has no verb. If your entire thesis is "IT people need to do better," well: engineer, heal thyself.
> I would not normally be quite so much of a jerk about the numerous syntax errors, except that this ends with a suggestion that people, you know, pay for this writing. I don't expect perfection (God knows I make tpyos all the time), but the closing paragraph starts with a sentence that literally has no verb.
Are you really bashing the author for imperfect English? The about page clearly mentions that they are not a native speaker of the language, and even apologises for any imperfections in their writing as a result thereof [1].
I strongly agree with your parent comment that attacking the author for presenting their strong views in an area is unnecessary. Make arguments against what they wrote, not who they are.
A little kindness also goes a long way. What the hell is this obsession with hammering everyone down with unwarranted negativity on this forum.
I didn't read the About page, I read this article. If I'd known that, would I have not been "quite so much of a jerk," to quote myself? Probably. But "that this ends with a suggestion that people pay for this writing" is still something of a sticking point: if I was writing in a language I wasn't fluent in, I would be very hesitant to ask people to give me money based on the strength of my writing. If you decide that such a standard is unreasonable of me and that I'm still an English imperialist or whatever, I'll take the L, I guess.
However, when you rather archly tell me
> Make arguments against what they wrote, not who they are.
I will respond: I did. The vast majority of my arguments were against what they wrote. I took a couple pot shots at how they wrote it. I don't know who they are, and certainly didn't argue against that either way.
> A little kindness also goes a long way.
Does that apply only to critics of the article and not the article itself? Because I don't think you can call the author's tone "kind" by any measure.
Was about to note the same thing. Agreed with GP on all other points, but I forgave the language mistakes after seeing their about page.
FWIW, while I don't condone it, I do think that any non-native speaker should take such criticism as a compliment. It's a sign that they're good enough to pass for native; if they were clearly struggling to write passable English, the reactions would likely be more polite with helpful pointers.
Coming back hours later, I'm going to be a bit harsher on myself: if I had checked the about page, I'd have still commented on the proofing errors, but I wouldn't have been so snarky about it, and I wish I hadn't been.
You're also right in your second paragraph, though -- the article didn't read to me like "this is someone who isn't fluent in English," it read like "this is someone who's fluent in English but tossed this article off quickly, made careless mistakes, and didn't bother to make a proofreading pass."
The big problem with his argument against PHP being a template engine is that PHP hasn't been developed with that in mind for quite some time. I think it's just a legacy feature that anything after a closing PHP tag and before a new opening tag gets echoed out - one that WordPress still enjoys.
I'm of the opinion that anyone now who calls PHP a "template engine" hasn't actually looked at it since 2001.
I'd compare it more to the way you can use Ruby as its own template language. I don't think I'd really build a PHP app in 2022 the way I would have built it in 2002, but I'd be okay with using "pure PHP" templates, assuming I trusted whoever was creating the templates not to blatantly abuse it. (I know that could be a big assumption in some contexts…)
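To be explicit about what a "pure PHP" template looks like, a minimal sketch (it assumes the calling code populated $items; escaping is done by hand at output time):

    <?php /* items.php: the host language is the template language */ ?>
    <ul>
    <?php foreach ($items as $item): ?>
      <li><?= htmlspecialchars($item, ENT_QUOTES, 'UTF-8') ?></li>
    <?php endforeach; ?>
    </ul>

The "abuse" risk is exactly that nothing stops a template author from putting arbitrary logic or queries in there, which is what dedicated template engines restrict.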
Others on here are doing that. A few counterpoints: 1. Electron is cross-platform - something the author never even mentions. 2. Really, PHP is the perfect end state for web development at all scales?
The lack of even acknowledging these things is what really discredits the author in my book.
Absolutely agreed. The original poster, if more knowledgeable, should set the record straight. Attacking the article without any valid counter is just point-scoring.
I'm a webdev, I've done nothing but full-stack web development for the past 10 years, and I mostly agree with this guy. I hate Electron garbage; the slowness is infuriating. I hate SPAs and React; it's unnecessary abstraction overhead in 90% of the projects I've seen it used in. The only thing I don't agree with him about is PHP, because the benefits provided by abstracting routing and templating into a "proper" framework like Laravel are well worth the performance tradeoffs, which can be mitigated later with OPcache and runtimes like Swoole.
> all these successful companies making billions of dollars and attracting top talent are just throwing money into a fire for fun
Huge bureaucracies waste billions of dollars doing things which a single dedicated programmer could do, or a small scrappy team could do better. This is not a conspiracy theory; it is the whole reason Y Combinator exists: startups can "disrupt" an industry.
There's a nuanced argument to be made that companies reach for complex tooling too quickly, but that's not what the author said. He's convinced that none of this tooling should exist in the first place, there is no application for them, and everyone is stupid for not just using the basics.
I'm more or less on the same page with you—and it's worth noting that the things we agree with him on are where the extra burden is placed on the client side, while the places we disagree are where the extra burden is placed on the server side.
If you can make sure your server is fast enough that the client never notices a difference, it shouldn't matter whether you're using Apache's built-in routing or Symfony's PHP routing.
Agreed. I was cringing to read this - had a laugh at the Patreon plug at the end of this poorly-written rant about something he clearly has no solid understanding of.
Yeah, I went back to take a look at what else is on the blog, and his main schtick seems to be hating web development and frameworks, and not even in a rational way.
I've been critical of so-called web apps for quite a bit more than a decade now. It's not the hate that was pathological, it was the obsession of taking a platform designed for displaying documents and transforming it into an application distribution platform.
And after untold man hours and billions spent, the results are still inferior. That's where the "hate" is coming from. Seeing such an incredible waste while at the same time being lectured on the greatness of the web is intolerable for anyone that has seen what a quality piece of software looks and feels like.
Hm, the way I read the article was – there are much more "simple" web apps on the Web than "complex". But, the "simple" web apps are made using the same complex technologies that are only needed by a few complex web apps.
I feel comfortable asserting you've never not used a complex web application, even for your personal non-commercial projects made for fun. The fact that his website (and other projects) is plain HTML and doesn't need to be executed is a confirmation of the article's ideas, not some deficit.
You can justify what you do with JS for money however you want, but businesses do it that way because managers decided it's the cheapest way for many people to collaborate, not because it's the best way for end users, security, or longevity.
What is the purpose of his website? It is to serve static content. But obviously, his entire rant is against more complex, interactive, and stateful applications. It's strange to argue that because static sites exist, all sites should be static.
>his entire rant is against more complex, interactive, and stateful applications.
... where they aren't needed. Which is the vast majority of websites, even commercial ones. It's a rare website that actually is an application and not just a means to display text or images.
The most insane, time-consuming abstractions I’ve written have been on the front end. Especially with React, since exhausting abstractions is basically a requirement for any large/complex app.
I work on a massive CMS and he’s correct. There is an insane groupthink going on in webdev.
Another team in my org has spent a year reinventing our client-side JS so that our templates are now mixed with the Svelte runtime. All to render/update cards on the page that can update when an API pings the page.
So 6 months later they’re still building the Svelte integration and have only shipped 2 pages to users.
This is a CMS where you should be able to create any page via a combination of SSR components, but no, now it’s a mix of SSR, SSR Svelte within Handlebars, and client-side Svelte. So the edit UI is now a fragmented mess of parts of the page that can and can’t render, ruining the default model of the CMS, all because they couldn’t bear to write vanilla JS. They also added React the year before, and now it just sits there decaying.
I'm not sure if this one org making a mess is good evidence that there is groupthink occurring in the entire industry. There are places that are using these frameworks well, and it really does improve productivity.
Reducing your website to something that can essentially be served straight out of S3 buckets, plus Lambda functions for the few interactive elements your site has (e.g. newsletter subscription, contact form), will reduce your hosting bill and attack surface, but increase operational complexity.
The key thing behind the "headless CMS" fashion is that most people severely overestimate the amount of cost reduction and underestimate the impact of something breaking down in the backend side as well as the engineering cost.
As if "devops" were a job... at most companies the sad reality is that the ops team gets fired and the developers who often enough don't even have more understanding of a Linux shell than ls/cd/cat/rm get told "you're also doing ops now, have fun!".
Inevitably, issues will pop up - hacks, data loss, the site going down over the weekend because there's no 24/7 on-call any more - and then management comes down to the devs and whines "what's the cause of problem X"... and fires the rest when they say "we're developers, not SREs and server administrators".
I'm a greybeard. I've been doing this since before HTML existed, and I've kept up with the changes as they occurred along the way.
I'm not going to refute the author's points one by one. Everything being complained about exists for a reason. I agree that it's incredibly complex. It's overwhelming. That's why people specialize.
An individual can still own an entire application and infrastructure, limit their technology choices to those from the 90s, and produce a perfectly functional, modern web application indistinguishable from anything using the latest tools and frameworks. It's a whole lot of work, though.
If you take the time to learn some of the modern tools, they'll save you a lot of time. Why re-invent the wheel? But all these time-saving frameworks are complex, and learning all of them would take years, if you haven't been keeping up as they evolved. You can learn one or two of them quickly, though. You can specialize.
That's why we have DevOps teams, UI teams, API teams, database teams. That's a lot of people. But that's what it takes if you want a high-performance web application that can scale to support hundreds of millions of users, is responsive around the world, supports a variety of languages, can handle disruptions, complies with regulations, and on and on. Don't need those things? Then stick with PHP and HTML on a server you run yourself. This isn't the 90s anymore. There are a lot of people out there, and if you're building something worth using, it's a different ballgame.
I agree with your argument. The comment about blogs taking long to load leads me to believe most of these complaints are really only applicable to small-scale projects. The massive stacks used for web development now are targeted at organizations that can scale into the hundreds or thousands of web developers with varying skills. In turn, when doing small-scale work, most newer developers copy the massive enterprise stack and pay the price for the possibility of scaling (both request-wise and developer-wise) without ever really needing to.
There are far more small projects in the world than ones that need to scale to be huge, while at the same time every developer working on a small project is trying to build their resume so they can one day work on one of those big ones. So that leads the every small project using these huge stacks for no technical reason, and bogging all those users down.
And then there's the one senior front-end dev who got to $10M ARR with a web app and React Native apps for iOS and Android, all from a Next.js monorepo, all by his lonesome:
https://www.youtube.com/watch?v=0lnbdRweJtA&t=432s
I really think React Native and Next.js (and tRPC and Prisma and GraphQL) have hit a PHP-style tipping point where we are going to see a great new flowering of cross-platform web and mobile apps (last seen in the Facebook/Wikipedia era of 2007-2014, driven by MySQL + PHP + jQuery + PhoneGap). I feel excited; the hype is finally real.
So yeah, sometimes that mixed-abstraction complexity works against you, but I feel like every decade or so it converges into a stack you can really use to effectively get things done.
> The entry barrier to programming needs to [be] high!
I strongly disagree with this sentiment.
I think the author's view that frameworks like Electron offer "no value over a native desktop application what so ever - well, perhaps with the only exception that now a 2 year old baby can make something shiny that you can click on with your mouse." is missing the point that a "2-year-old baby" making something shiny you can click on is an amazing feat, and an example of the power of the democratization of computing technology.
I do agree that tech stacks are increasingly obtuse, and not something any one person can carry in their head. I do agree that this is problematic.
However, I really believe that the more individuals get access to a technology and have their barriers to using it removed, the better odds we have of letting someone with great ideas execute those ideas.
I’m not sure I want an app designed by a baby though. Chances are they missed something important.
I’m not saying I agree entirely, but I’ll say this… every day I’m writing embedded microcontroller code, I’m getting my ass kicked. I’ve been doing it for some time, and it doesn’t really get easier; I’m just able to write code that does more with the same hardware. It’s tough work imo. … then I hop on a YouTube channel and see some messaging queue system, and the guy writes four lines in two files, does some npm magic, and that’s it: he has a server running and multiple clients connecting and queuing and so on. Very cool… but…
This guy is talking about how he has no idea how it works, it just does…
The high bar might be welcome for when it “just doesn’t [work] anymore”. Because I don’t care if a website sucks, but I do care if my furnace stops, or a plane falls out of the sky, or a bad queuing system causes a global shipping deadlock, etc.
Maybe the bar should be low for bringing people in, and high for people that actually make things.
I agree. And the thing is, you could take out that final paragraph and I'd agree with every other word in the piece.
It should be perfectly possible to write libraries and frameworks which are easier to use without writing additional layers of abstraction. Instead of writing a new template language in PHP on top of PHP, we could write it in C (or Rust) and it shouldn't be substantially slower or more resource intensive.
This would of course require an experienced programmer who knows a lower level language, but then the barrier to entry would be lower for everyone else!
While I can get behind lowering barriers to entry, I don't think that's the same thing as tolerating ludicrously slow garbage tools just because everyone insists on either developing like a baby or only hiring babies. As an industry, we should be deeply ashamed at how much of the world's time and resources are wasted running poor software so a company can save a few bucks.
> As an industry, we should be deeply ashamed at how much of the world's time and resources are wasted running poor software so a company can save a few bucks.
This seems like you are saying everyone should write in assembly or C for every task possible, since that's the only way to avoid "wasting the world's resources running poor software". At some point the trade-offs are not solely "so that a company can save a few bucks"; the trade-offs also include things like "why spend a year writing a simple server when I can write a far superior server in, say, Python/Go/etc. in a few minutes". Yes, I understand you may be more specifically talking about Electron here, but at what point are you allowed to stop focusing on saving a few KB of RAM or a few CPU cycles and focus on actually getting things done? Feeling productive? More platforms?
> This seems like you are saying everyone should write in assembly or C for every task possible, since that's the only way to avoid "wasting the world's resources running poor software".
No. Look, computers today are many orders of magnitude faster and more powerful than they once were but our software still takes measurable seconds to do simple tasks. That's not 'oh, my software could be a bit more optimal' level of slow, that's 'I built my software like a Rube Goldberg machine made of garbage' slow.
> but at what point are you allowed to stop focusing on saving a few KB of RAM or a few CPU cycles and focus on actually getting things done?
If it were only a few KB of RAM and a few dozen CPU cycles we wouldn't even be having this conversation! We live in a world where people seem to think a chat program using a GB of memory is acceptable. That's just insane.
Barrier is perhaps the wrong metaphor here; it's more of a slope. So the question becomes: do you make the slope less steep but longer, so that climbing it is easier, but you end up at the same elevation, just having spent more time? Or do you dial down the overall elevation, so that the climb is less steep and takes the same amount of time?
Both are valid approaches to democratizing technology, but the last one results in reduction of quality. And also higher profits, which is likely why it won in the end. Given the existence of the other approach, though, I think there are valid grounds for complaint here.
I agree with the sentiment. I've been in a Magento and WordPress shop where I had to implement badly written plugins that my boss paid money for. I've seen people come out of code bootcamps calling themselves developers when all they can do is implement a design spec with HTML, SCSS, and jQuery. Yes, I taught myself a few programming languages, but I put in the work and effort to make my skills worth the time to an employer. We shouldn't lower the barrier only to call whoever can get past it with low-level skills an expert in the field.
I had a meeting today with a "cybersecurity expert" consulting for a big company, who had no idea what a hash function was. Literally no idea, as in "never heard the term before".
I explained the concept to her and she started referring to it as "encryption". I told her it wasn't encryption, and she immediately started to debate me about that. Let that sink in for a bit: she really thought she knew more than me about a (fundamental) concept that I had introduced to her world a few minutes before.
There's a plague of incompetence and mediocrity everywhere, and that is one cause of the issues OP is talking about. It won't get better anytime soon, so expect to have a lot of fun in the coming years.
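For anyone who wants the distinction made concrete: a hash takes no key and cannot be reversed; encryption takes a key and is reversible with it. A minimal Node sketch (TypeScript, standard node:crypto only; the values are throwaway):

    import { createHash, createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

    // Hashing: keyless, fixed-size digest, no operation gets "hello" back.
    const digest = createHash("sha256").update("hello").digest("hex");

    // Encryption: keyed, and reversible with the same key and IV.
    const key = randomBytes(32);
    const iv = randomBytes(16);
    const cipher = createCipheriv("aes-256-cbc", key, iv);
    const encrypted = Buffer.concat([cipher.update("hello", "utf8"), cipher.final()]);
    const decipher = createDecipheriv("aes-256-cbc", key, iv);
    const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]).toString("utf8");
    // decrypted === "hello"; nothing analogous exists for `digest` above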
Security folks sometimes don't realize the connection between hash functions (for data structures) and cryptographic hashing (aka digest) functions. Perhaps that was the confusion?
No, I have definitely seen people with fancy titles around data security, and now also GDPR "specialists/consultants", who lack the most basic CS knowledge, sometimes coming from other fields like business and administration.
People will chase opportunities wherever they find them, if you have a good network and managers who are as clueless as you then you can absolutely fake it till you sort of make it (in large enough corps).
To this day I haven't met a security consultant or data protection person who had any idea what they were talking about. I'm not talking about experts, but about people who were appointed and got the title "Security something".
Most of them were regular business people who just parroted buzzwords like "encryption" and "at least 256 bit".
I don't understand this. When I think of security people, I immediately think of articles where they describe how they made a timing attack which caused a race condition, which allowed them to overwrite the stack, and do arbitrary code execution via return-oriented-programming.
I'm reminded that these people exist and can routinely do stuff like this (or even more insane), and even though I've considered myself knowledgeable about this stuff, it makes me feel like the monkey looking at the monolith in 2001.
Their "security advice" consists of parroting back stuff they read on blog posts, LinkedIn, and places alike.
"Change your password every 3 months", "did you enable 2FA?", blah blah blah.
Add clueless managers (as another commenter said) and some nepotism to the mix, and that's how they get contracts with big names (Microsoft, Oracle, ...).
Afterwards it only gets better for them, because they can advertise that they were "part of the security auditing team at <big company>, reporting directly to the VP", even though their only real useful task was to keep warm coffee at hand.
This goes on until one day someone in a meeting asks them about a hash function, they are absolutely clueless, and the show falls apart; or it can be much worse, going on until an entire community has to pay for it with life-long consequences (see Flint), or until billions of dollars are lost/stolen (see Madoff, Holmes, your choice of weekly crypto scam), or until planes start falling out of the sky because of a newly developed "feature" (see the Boeing 737 MAX)... the list goes on forever.
We live in an era of mediocrity disguised as a (fake) meritocracy, with all the consequences it implies.
No, it probably won't end. No, you are not a dinosaur, but development has changed. Decades ago you could build an app from the ground up. This gave you a bunch of different layers to compose, and probably made the app overall simpler. Now most developers are given a box to create their feature in, whether it's a Spring Bean, a React component, or a serverless function, with an entire application stack under it. You could have each developer or team manage their own stack, and thus get microservices. But now you have just pushed your complexity from a single monolithic app to the space between all the individual apps.
At the end of the day, naked tech has no value. It's only the end-user features that make money. The industry has optimized for this, and that is what gives rise to all these seemingly insane practices. They aren't great from a pure tech perspective, but they help speed up feature development.
Paradoxically, the community still values naked tech much more highly than end-user features. That is why the community heroes are those who write kernels and frameworks. So you have a million developers all trying to "make it big" with their bespoke framework. 99% of these go nowhere, but even the 1% that gain some traction just add to the constant churn of inventing "revolutionary new ways" of doing the exact same thing.
I sincerely doubt all that layered complexity helps push features faster. I left a next.js React job a couple of months ago where the team spent 90% of the time dicking around with the setup and wrangling with dependency issues instead of delivering any useful features. The amount of time they took to set up their "stack" and the number of issues they had with it was mind boggling. I left and swore to never again come near that mess of an ecosystem.
That's a great perspective, but are those peeps writing the unused frameworks just wasting their time solving a question nobody has asked?
Take the D language, it's basically a poor man's Java, with a shoddy garbage collector and aspirations at being C/C++... Is that the work of heroes or the misguided?
Can't it be both? or none of the above? I don't know anything about D specifically, but effort spent here is likely to be valuable elsewhere. Either because of techniques learned, or approaches validated.
Even if D never catches on, folks will learn from what they've done. And the folks who did it, will likely be able to get jobs in the field. I doubt the effort is truly wasted.
Are they solving a question no one asked, or a question that already has an answer? If there is already a clear answer, then it's probably a waste of time. Your answer needs to be better, and that is rare. When your answer is better, it's a paradigm shift to some. To others the answer is no better, and then it's just the infinite reinvention.
It's even rarer to answer a question that no one has yet asked. Then you are a revolutionary.
I don't even know what to respond to the last assertion, because it's "not even wrong".
Like, just about the only thing D has in common with Java is their shared C syntactic heritage, and an object model with single inheritance. But that also describes numerous other languages.
Precisely both, since no one agrees. At least money can be used as a fitness function sometimes, with games and not with enterprise. The idea is that if D were that much better people would make more money by choosing it over Java.
That's an interesting and valuable perspective. I'm definitely one of the people who keep reinventing things that have already been done before, even though I'm not very good. But since I don't use programming to put food on the table, maybe that's okay - it's more of a hobby/educational endeavor than anything important.
All the same, maybe if I spent less time writing, e.g., yet another dictionary/map, I could actually make something worthwhile.
Putting aside the downsides and reasons not to use Electron, this statement is just false. They are not "constantly crashing." I currently use at least Figma, Insomnia, Slack, Spotify, and VS Code and these apps rarely crash, if ever.
I agree that these apps rarely crash, in the strict sense of the word. I don't think I've ever seen Slack get into a state where it's misbehaved to the point where the broader operating system has seen fit to kill the process. However, I have seen Slack get into weird states (like not updating read/unread status of messages, waiting forever for content to load, not showing notifications, etc) where the only way to fix the application was to quit it and restart.
So no, Slack (and other Electron apps) don't crash. But maybe they should. A hard crash (and automatic restart) is often better than the slow creeping insanity that seems to afflict Electron apps when their internal state gets out of sync with itself.
I mean, are any of those "not crashes" exclusively related to the domain of software written in a single language? Or is it a case of complex systems and not having AAA+ developers working on it?
> (like not updating read/unread status of messages, waiting forever for content to load, not showing notifications, etc)
The memory arguments are valid, but I don't think I've ever once had an Electron application crash on me. Contrasted with older native applications that would crash on a weekly basis.
Every well-written app has multiple processes it can offload work to, and it can simply restart a single process if that one crashes. Even Slack runs a number of processes, so even if one does crash, it simply restarts without affecting the main UI process.
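A minimal sketch of that supervise-and-restart pattern in plain Node (TypeScript; "./worker.js" is a hypothetical script, and a real app would add backoff and a crash-loop limit):

    import { fork, type ChildProcess } from "node:child_process";

    // Keep a worker process alive: if it exits abnormally, spawn a
    // replacement so the main (UI) process is never taken down with it.
    function supervise(script: string): ChildProcess {
      const child = fork(script);
      child.on("exit", (code, signal) => {
        if (code !== 0) {
          console.error(`worker died (code=${code}, signal=${signal}); restarting`);
          supervise(script); // no backoff here; a real app would add one
        }
      });
      return child;
    }

    supervise("./worker.js"); // hypothetical script doing the heavy lifting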
VSCode never crashes for me.
Slack crashes rarely for me. More often, it just becomes unusably slow.
Spotify crashes multiple times a day on me. The music keeps playing however, so it's only an issue when I want to make a change.
Is it possible to use the "windows" version of the Spotify electron client on Linux? That might make a difference since the Linux client is unsupported.
It's possible that they crash constantly on that person's computer, especially if they have a bad power supply or other faulty hardware. The problem might just be their own.
I'm not a fan of Electron (heck, I'd go as far as to say I don't like it), but those statements are just unfair.
I run a bunch of Electron apps on Linux every single day of the week, and have for years now, and I honestly can't say any of them has ever crashed.
Yes, they may take up a bunch of RAM, but how many software engineers have less than about 16GB, and what's that RAM there for, if not to run my tools?
I can't say I've ever gotten over a few gigabytes of usage in a normal work day (not running any VMs for example). I don't use Chrome nor am I a tab-hoarder.
Whether I like Electron itself or not, the tools built on it seem to get the jobs I need done just fine. Who cares if it takes 3-4 seconds to open, once per day?
I used to do programming assignments on an Eee PC with 2GB of RAM, running Xubuntu and using Emacs as my editor. While modern netbooks generally have 4GB, and cheap laptops 8GB, it's not guaranteed that a student will be using a modern computer.
Dev environments that waste RAM keep people from learning how to program, and disproportionately affect anyone using cheap hand-me-down hardware.
I've experienced multiple copies of Spotify on different machines getting forever stuck on a blank screen when opened. Although that's probably more Spotify's fault than the fault of the tools used to build it.
Spotify does not like it when the internet connection is spotty or nonexistent during startup, even when you put it in offline mode beforehand, or when your corporate VPN null-routes the Spotify IP range.
> In the past IT people, whether we're talking about programmers or something else, were very clever people. People with a high level of intelligence that took serious pride in doing things in a meaningful and pragmatic way.
This is true. They were exceptional, and there were far fewer of them. They worked in organizations that tightly controlled intellectual property and didn't share their source or methodologies for free with the world. Some of these people had an issue with the barriers to entry and did something about it, democratizing access. They did crazy things like open-source their work and evangelize an open-source ethos. They created more accessible programming languages. They created frameworks based on methodologies that they had learned the hard way. They changed the world.
After a solid 25 year run of it, we are in a period of great technological abundance built on sharing knowledge and technologies. There is so much available now. Yet, we are in the infancy stage of developments that can barely be imagined. Whatever seems like madness now is nothing compared to what is to come. More languages. More frameworks. More ways to solve the same old problems. More abstractions built on abstractions built on abstractions. And, occasionally, there will be force multipliers.
As I write this, I feel energized, not demoralized, about where we are and where we are heading.
I'll offer a counter, perhaps unpopular, opinion. I too have found myself at times mentally fighting against what appears to be a tidal wave of modern software techniques. But I don't do that anymore; these days I try to embrace them using an ecological perspective. They won't all be great ideas: some will prevail while others fail, and some will even survive well while layering complexity and loss of performance onto the industry. However, I now see them all as necessary experiments to further the technological advancement of the industry. Even if they are bug-ridden security nightmares, they all work to provide selective pressure and refinement for the industry in the aggregate. Upon close inspection it looks like a mess, but if you back out and see it akin to our own biodiversity and natural selection processes, you may begin to appreciate the wide variety of techniques and talents we have to choose from, and that over time we should expect a refinement of our abilities.
The company I work for hired several incompetent devs who wrote an abortion of an Electron client. It very nearly killed the company. I finally convinced the boss to let me rewrite it in Qt, and now our feedback on the UX is far, far more positive. Qt and GTK+ are among those toolkits that work "well enough" on the big three, especially Qt (GTK+ has serious theming issues on macOS), and it's hard to justify Electron as long as Qt, or at least GTK+, exists.
Fuck Electron. I've seen what it can do to a product firsthand. If you get a competent web dev, it might work, but it will be slower than snail shit.
Just write it in Qt. You don't need to use C++, Qt has bindings for many languages, including Python, Rust, JS with Qt Quick, and even Lua.
Myself, I don't mind modern C++, though in many cases I prefer Rust.
Your argument seems to be that bad developers write bad software? Isn’t that true no matter the platform? I have written equally successful software in both Electron and Qt. In my experience, a competent developer can write great software given almost any platform.
While that may be partly true, Electron inherently attracts low-skilled, incompetent programmers due to its "easiness", while in reality JS isn't that much easier than, say, Python. Not only do you have to deal with dumber programmers writing the code, but what people forget is that CPU and RAM usage isn't just some metric you see in htop or Task Manager. Electron apps are much slower than native apps at doing almost anything, sometimes to the point of obscenity. That was a big reason we were forced to ditch Electron; the WebKit bullshit was just too much for what we were trying to do. No matter how many bugs I fixed, it was still a bloated sack of shit. Qt was easy to write. Sure, you had to put a few #ifdefs here and there, but for the most part it was portable and easy, and Qt Creator saved us a ton of time compared to baking our own. If you write any application against the "native" widgets of each host OS separately, shame on you; that is the wrong way to do things in almost every case nowadays. Qt and GTK+ provide a much better user experience with only slightly increased difficulty. It's hard to justify Electron.
> If you write any application in "native" widgets for the host OS, shame on you. That is definitely the wrong way to do things in almost every case nowadays
Could you elaborate on that? Isn't using the native GUI API of the OS a relatively sensible default, especially if you aren't targeting multiple OSes? If you definitely want cross-platform support, I can see the argument for choosing one of the cross-platform frameworks but I still don't think that means using the native GUI API is such a bad choice.
Qt is a great choice for writing multi-platform desktop apps. It would take a lot longer, and add unnecessary complexity, to write the UI specifically for each platform.
I suppose for OS utilities targeting a specific platform only, that makes sense. But for portability, you sure as shit shouldn't be trying to write a custom GUI using the native OS GUI toolkit for all three platforms. It's surprising how many make this mistake. It's like they've never heard of Qt and are surprised development takes forever.
The only problem I had with Electron was the slow booting time. Other than that it worked fine. I didn’t see any more bugs than I see on any other platform. Perhaps the problem was that you are very familiar with Qt and not with Electron? I am not saying that Electron is perfect. I don’t think any platform is. But it seems to me that (again) the problem you are describing is that bad developers will write bad code on any platform. And yes a lot of inexperienced developers know JavaScript so they are attracted to Electron and write bad code. Completely agree. But I think it is unfair to blame Electron for that. Just saying :)
I think this rant is really railing against the irrational outcomes of a few things:
1. People tend to put excessive hope in silver bullets, quick fixes, and the easy path. "Discipline" is generally shied away from.
2. It's an attempt to trade the cheaper thing (RAM) for the more expensive thing (engineer time), especially in the face of rapidly improving hardware (which may no longer be true).
3. As engineers become expensive, people who have little interest in the subject become sufficiently motivated to hit the minimum bar, bringing down the average talent/quality of the pool. I am daily shocked by the number of engineers who do not understand the differences between implementations, and how "good enough" for a single scenario is not "good enough" for a complex and unpredictable future. Your 1-month implementation may be "faster" on paper, but if we count the implementation time until yours and mine both have acceptable robustness under duress, acceptable bug counts, and predictable outcomes in unpredictable scenarios, then my 6-month implementation will beat your requires-a-rewrite-every-quarter one by years... But never mind that: the engineer with the crap implementation will "save" the company $100K a month in opex when he cuts the inefficiency in half, and be promoted to architect. Meanwhile the engineer who ships the efficient implementation, maybe slightly over schedule, will languish because there isn't even a 10% gain left to be had, or the thing runs so stably that the project is entirely forgotten for years (because it just works)...
So yeah, Electron is what we get when we pursue logical fallacies with little to no accountability for the true outcomes, and no rewards for excellence and subject mastery. Yup I'm a dinosaur too.
> It's an attempt to trade the cheaper thing (RAM) for the more expensive thing (engineer time), especially in the face of rapidly improving hardware (which may no longer be true).
It's worth noting here that Electron trades engineer time you pay for against RAM your customers pay for.
On the other hand, Electron got things to a state where Linux is often supported and where I can usually run the whole app well sandboxed in my browser. So in the end, it's not that bad for end-users like me, either.
It is an interesting point, though. I've tried to think of elegant solutions where we get the customer's browser to preprocess work for us so we can reduce our total Lambda/EC2 time. For example, one could have the browser generate the thumbnails for an uploaded photo instead of just uploading the single image. After all, thousands to millions of browsers can make short work of something that might be heavy on AWS.
Sadly I've yet to find many FE engineers who are all that experienced with Web Workers.
Someday this could be a good use of all those free CPUs.
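A browser-side sketch of the thumbnail idea (TypeScript, using only standard browser APIs; this version runs on the main thread via a canvas, and moving it into a Web Worker with OffscreenCanvas would be the natural next step):

    // Downscale an image in the user's browser before upload, so the
    // server/Lambda never has to generate the thumbnail itself.
    async function makeThumbnail(file: File, maxSize = 200): Promise<Blob> {
      const bitmap = await createImageBitmap(file);
      // never upscale; only shrink images larger than maxSize
      const scale = Math.min(1, maxSize / Math.max(bitmap.width, bitmap.height));
      const canvas = document.createElement("canvas");
      canvas.width = Math.round(bitmap.width * scale);
      canvas.height = Math.round(bitmap.height * scale);
      canvas.getContext("2d")!.drawImage(bitmap, 0, 0, canvas.width, canvas.height);
      return new Promise((resolve, reject) =>
        canvas.toBlob(b => (b ? resolve(b) : reject(new Error("toBlob failed"))), "image/jpeg", 0.8)
      );
    }

Upload the resulting Blob alongside the original, and the resize work is distributed across the users' machines for free.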
Reminds me how in one online multiplayer game the developers decided to offload the decision of "who's the winner of the match" to the client machines, because there were problems on the server side and they couldn't do it reliably there.
The developers said this voting system worked surprisingly well. Theoretically, if more than 50 percent of the game clients were malicious they could all vote for the wrong player, but this was hard to accomplish; it would have required coordination among randomly matched players.
(Eventually the developers did fix the server-side code and removed this system.)
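For the curious, the server side of such a scheme is tiny, which is presumably why it was attractive as a stopgap. A toy sketch (TypeScript; all names invented, not the actual game's code):

    // Each client reports who it thinks won; the server takes the plurality.
    // With a majority of colluding clients the result can be forged, which is
    // why this only held up because random matchmaking made collusion hard.
    function decideWinner(votes: string[]): string | undefined {
      const tally = new Map<string, number>();
      for (const v of votes) tally.set(v, (tally.get(v) ?? 0) + 1);
      let best: string | undefined;
      let bestCount = 0;
      for (const [player, count] of tally) {
        if (count > bestCount) {
          best = player;
          bestCount = count;
        }
      }
      return best;
    }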
My observation is that 1. & 3. in your list are very closely related.
> 3. As engineers become expensive, people who have little interest in the subject become sufficiently motivated to hit the minimum bar, bringing down the average talent/quality of the pool
The piece is kinda whiny, comes off like "get off my lawn!", and doesn't really add much beyond the myriad other complaints. Thing is, I don't really think that many people love developing this way; it's just that nobody will pay anyone to shovel our way out from under this mess of technical debt. GUI development on Windows is a schizophrenic mess, and then you want to be cross-platform? The only options are Qt, Wx, and Electron... or imgui... or LispWorks CAPI or something. Electron has permissive licensing, you can hire JS devs, and most importantly you can externalize a fair bit of the debt onto the end user as power and compute resource consumption.
It's just like everything else in today's economy: incredibly short-sighted, and whatever you make will either evaporate or be someone else's problem in short order. But that's how you behave in such an environment! You'd get fired for writing a cross-platform toolkit if the expectations are set through the current climate of shipping shit apps quickly. Apps that function just well enough to retain a subscription or shovel ads or mine data or upsell or whatever.
You'd have to change the entire reason why people are paid to write software to fix this. People aren't "stupid," they're on average lowish skill (JS bootcamp to first hire...) and behaving rationally under the incentives.
I feel this so much. The direction web development has gone in is so insane to me. The complexity that has been introduced feels like complexity for the sake of complexity. In my opinion, the cons far outweigh the benefits. But I would never say this out loud since it feels like everyone else has drunk the flavoraid and is dismissive of anyone who is not on the bandwagon. I’m just thankful I can develop my own projects without the insanity and am counting down the years until I retire and never have to create a webpage using a build system (!!!) ever again.
It's a good rant, even if some of the details are wrong. I think what he's really getting at is that software is flooded with bad developers. It's only become more true as salaries go up and everyone flocks to the field.
By bad I DON'T mean junior or inexperienced. Bad is someone who really doesn't have any intrinsic motivation or drive to improve their craft, or worse, doesn't care how it impacts the world; they're in it exclusively for the money. It really sucks working with people like that. This is literally what this entire stupid orange website used to cheer for 10 years ago, when recruitment and VCs all talked about hiring for passion; it's why GitHub contributions and side projects and an interest in tech were valuable signals before they became another metric to be gamed.
Tech really does seem to be increasingly full of mercenary types now. People who give 0 fucks about ethics or integrity. Don't care what their employer does, don't care about their craft, don't care about working multiple jobs in secret. Just robots you put a coin in and turn the crank to get some code out. GitHub has admitted that Copilot was trained on all GitHub code, regardless of license. Mass theft. But some robots built that because they were paid enough.
I do blame the player. "Don't blame the player" is a bullshit copout. Players are all there is: people who either learn to sometimes put their self-interest below something bigger, or robots who are easily bought. If no one builds evil shit, it simply doesn't get built. Concretely, candidates can choose not to enter a field they are uninterested in, or make an effort to develop an interest, and managers can design hiring pipelines that test for these things.
This attitude achieves nothing but quickly becoming that guy as far as peers and management are concerned (assuming you can even get through the door to begin with)
I feel like my attitude has a chance of creating change, whereas the attitude of going along with whatever as long as you get paid is guaranteed to change nothing. And yeah I have worked at and gotten offers from FAANG and adjacent startups with hard interviews - competence has nothing to do with it.
Interesting that the Unix command line is still just as functional and useful as it was 40 years ago, and yet the UI frameworks have been thrown away and replaced every 2 or 3 years. My personal experience with UI has been OpenWindows, X/Motif, Java, HTML/CSS, plain old JavaScript, ExtJS, jQuery, Dojo, Angular (various incompatible versions), and React. Glad I finally got to retire!
This is why I get paid top dollar as a consultant ;) When a company's top engineers decide to rewrite their stack in JavaScript, I get to swoop in, take $10k and tell them their old PHP monolith was just fine.
It's amazing how much of 'senior management' is actually just stopping people from doing dumb things. Turns out it's pretty much a full-time job just stopping silly things and keeping entropy at bay.
With a full-on rewrite, it's mostly because said top engineers have realized that their application has grown from tiny, humble beginnings into a behemoth with stuff bolted on everywhere and remnants of no-longer-used components that were never removed, while their superiors won't ever grant budget for actual software maintenance (e.g. removing old cruft, updating that Symfony release or, in some extreme cases, actually using a framework instead of hand-written router and ORM code), but they would be willing to spend money on a complete rewrite.
What these "top engineers" need is a decent CTO, not people like you who come in and tell them to continue working on a tech stack that they have despised to work with for many months now.
Incompetent upper management will only lead to resignations out of frustration, not to progress.
Yes, agreed. But do you know how rare good CTOs are? I do, because I've consulted for more companies without a good one than worked at companies with a good one.
A good CTO is like a good CEO. They have to get involved in the nitty-gritty sometimes. Stepping down into the details and back up again is a skill that not many engineers possess.
These are the tools that promote the idea of keeping app state on the server and having the client and server talk HTML, not JSON. If you're not yet familiar with the concept, you'll love the essays here: https://htmx.org/talk/ (scroll down and look to the right side).
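The gist, independent of any one library: endpoints return ready-to-insert HTML fragments instead of JSON plus a client-side renderer. A hedged Express sketch (TypeScript; the route and data are invented for illustration):

    import express from "express";

    const app = express();

    // Return an HTML fragment that something like htmx can swap straight
    // into the page; the authoritative app state stays on the server.
    app.get("/contacts", (_req, res) => {
      const contacts = ["Ada", "Grace", "Edsger"]; // stand-in data
      res.type("html").send(contacts.map(c => `<li>${c}</li>`).join("\n"));
    });

    app.listen(3000);

On the page, an element like <ul hx-get="/contacts" hx-trigger="load"> requests the fragment and inserts it: no JSON, no client-side templating layer.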
It will end when people like the author drop their victim mentality and start putting their own ideas into practice. It seems that this person is content with yelling on the sidelines about how good and efficient things used to be.
Meanwhile, the shitty tech is winning. According to this author, bad code and bad tools and bad frameworks are reigning supreme over real engineering. Why?
"The situation is really bad for the industry." Also, why?
People with the mentality like the author don't actually want to build things. They just want to sit and complain. Their sense of righteousness and victim mentality gives them more pleasure and validation than actually engaging with the "modern" tech world.
Some other articles by the same author:
- "Using a framework can make you stupid!"
- "So-called modern web developers are the culprits"
- "One sure way to determine if you are stupid"
- "SQLite the only database you will ever need in most cases"
- "No, your website is not a web app even if you call it so"
> bad code and bad tools and bad frameworks are reigning supreme over real engineering. Why?
Why? The (ultimately unsuccessful) quest for the silver bullet. Nobody wants programming, they want programs - so anything that promises to deliver programs faster looks like a holy grail. Inevitably, though, the promise boils down to a pre-packaged implementation of an existing approach that does something relatively specific, with some options for customization. If you want to step outside that customization, you not only have the assumptions baked into the new "silver bullet", you also have to understand all the nuances of the layers upon layers of other approaches (and all of their assumptions), to the point where it would be faster to just shed all the layers and do it yourself (but you can't because noooo, you're a dinosaur, you don't understand anything, it's the future, it's the modern way of doing things).
Which is why we ought to be building the web in assembly.
Seriously though, there is an equilibrium somewhere, and on top of that a lot of this is more about business and work than programming.
I may be getting out of my depth here, but am I wrong to say that a good deal of why webdev in Java is even a thing is because there were people around who knew Java to begin with? That a good deal of people learned PHP just because WordPress is a thing? That React Native for desktop is about people with expertise in React Native being able to develop for desktop?
Workers want paychecks. Society wants products. Workers will learn to make products so they earn paychecks and if they can spend less time learning they will, obviously, do so.
It’s refreshing to see this as the top comment on a forum where we are often guilty of this mentality.
I notice that this mentality isn’t as common in circles of people with their nose to the grindstone building things.
Right now I have some game modding Discord servers in mind where everyone has crappy tools but is focused on building things regardless, and complaining about the state of things just feels trite, as if the entire ecosystem is responding “Ok, in Utopia things are better, but what are you building today?”
For some reason people confuse sour grapes with some sort of high brow eureka they need to dump onto the world.
> The browsers native language is HTML. HTML is a markup language and if you feed HTML to the browser it will very quickly render the page with what you give it. But no. Let's not do that. Let's instead feed the browser with JSON and then have the build in JavaScript engine translate that into a pre-formatted output which then gets translated into HTML before it's served to the user.
I agree with you. I agree with the author. I put my own idea into practice:
> Meanwhile, the shitty tech is winning. According to this author, bad code and bad tools and bad frameworks are reigning supreme over real engineering. Why?
The funny thing is, 20 years ago the same arguments were being made.
"Windows MFC is a nightmare to write against!"
"X11 is terrible, why are we using it?"
"Everything we write is built on so many abstractions developers don't know what they are doing."
"VB6 is dumbing down development"
"Developers writing in Java don't understand what is really going on."
"Java is too slow!"
OK the last one was true. UIs in Java were obscenely memory intensive relative to computers back when Java was first introduced.
But yeah, the more things change, the more the complaints stay the same.
Java got its slow reputation mainly because any early encounter with a Java applet would proudly display the Java logo and then cause your computer and disk to thrash for 30 seconds.
Then Java developers heard this, confused it with "Java executes slowly", and so they never fixed it, until Java applets just became obsolete.
Internet complaining gets eyeballs. Look at this comment thread. Look at the position in the HN queue.
I'm not a fan of it. I understand the author's complaints and frustrations (I am also "chronologically-challenged"), but I have found that complaining doesn't really buy me much. I'll do a minor whine, from time to time (usually about the ageism in tech), but, for the most part, I try to bring some light to the conversation.
I write native (Swift) apps for Apple systems. I tend to avoid the "bleeding edge," as that isn't really where Ship lives, but I think that the way I do things, results in very good software (note that I didn't say "better software," as comparing against others only pisses people off, and doesn't buy me anything -unlike a lot of folks, I'm not in competition with anyone).
Of course, that doesn't prevent others from sneering at me, but that's their burden. I'm not exactly sure what benefit it gives them, as I am no threat.
If people like the way I do things, and want to improve that, then I am often fairly helpful, but I am not really into standing in the street, yelling at passing cars. I have better things to do.
Swift is only 7 years old, one of the newest languages/tools in existence. This is an implicit argument against the author's case.
One might perhaps argue that Swift's newness/modernness is an exception, that it's "one of the good ones" when it comes to new systems. But this doesn't really work. Using Swift for native GUI app building is platform-specific, and writing native apps for other platforms requires other tools. If Swift represents an improvement on what came before, it implies that such improvements are needed on other platforms also. (If it doesn't represent an improvement, why are you using it?) This also implies that cross-platform solutions might be useful - like Electron or React Native!
Being chronologically challenged doesn't actually prevent one from understanding the driving forces behind modern software technology. Certainly, fads and cargo cults exist, driven in part by people's inevitably incomplete understanding and desire to follow practices that others seem to be using successfully. But to be able to distinguish between the fads and the useful advancements requires a better understanding than the OP exhibits.
Complaining seems like a perfectly fine activity to me if there's some valid content in it. But it becomes fairly useless otherwise, except possibly as you say for driving social media engagement.
New is not necessarily bad. Old is not necessarily good. It all depends on what we are doing, and what our goals are.
I also use PHP to write my backends. It works nicely, for the scope of my needs. I’ve been writing in that, for over twenty years. My servers work quite well, but aren’t particularly “buzzword-compliant.”
I’m not really a server programmer, though. If I need a backend to be more intense than what I’m capable of doing, then I’ll hire someone to write it, using a more robust stack, and I need to be prepared to open my wallet, as I have high standards. Good engineers will always cost more. They’re worth it. They will frequently also have gray hair; regardless of the tech.
Alternately, it becomes impossible, by analysis paralysis, to decide what stack to learn and build from. Which one is safe? Which one is secure? Which one has legs?
Which ones do you dedicate your precious time to as a direction to keep earning a paycheck?
There are also great projects like Hotwire, gaining momentum, that seek to actually address some of these problems of never-ending layers of complexity. I think constructive posts advocating for things like that would be more productive than the millionth "SPAs are too complicated" blog post.
Most of these solutions that the author complains about came about to solve real problems, and without addressing how we can continue to solve those real problems with a simpler set of solutions, it's really just noise.
Veterans tend to gloss over the fact that most "evolution" in programming revolves around programmer experience, not around the use of resources.
The bad side is that passionate young people come in hordes, and they will gladly program anything for a penny. For the company it's cheaper to hire a team of those youngsters and let them build the program in every (ugly) way they can, rather than pay a fortune for a veteran programmer who takes their time to deal with code that manually handles memory.
The good side is that you no longer need a computer science degree or 20+ years of programming to make something decent.
>Meanwhile, the shitty tech is winning. According to this author, bad code and bad tools and bad frameworks are reigning supreme over real engineering. Why?
Because in the time that OP spends tracking down segfaults and getting a dozen native libraries to compile for his native app, the JS dev has already finished feature XYZ. And because the environment is so high-level, he can use the extra brain capacity for things like UX, business logic, and accessibility (things people actually care about) rather than meaningless implementation details.
This stuff won out for a reason. HTML/CSS/JS is literally a 10x improvement in iteration speed versus native.
I've been out of the GUI scene for a long time, but I will say I've been hella-impressed with some of the more modern Qt tech.
QML gives you JS for the view layer (and some logic if you desire), and a declarative way to lay out your UI. I think it's worth a comparison as a reasonable way to quickly iterate on a UI while having something very lean and portable as the output product.
Qt is a mix of LGPL for the core, and GPL for some of the "value-add" modules (for which there's a commercial license available for those who can't live with GPL requirements).
> Compared to native development on a single platform? Not even close. HTML/CSS/JS development is painfully slow in comparison.
I find it's often not, largely because the amount of focus on web apps means the native frameworks often are less productive by comparison than what is available for the web.
I don't see why I'd ever spend time on any particular platform when I could avoid it. I don't like any of them, but I still want my software to work on them.
>Meanwhile, the shitty tech is winning. According to this author, bad code and bad tools and bad frameworks are reigning supreme over real engineering. Why?
Would it have anything to do with coding boot camps teaching people to code against these specific frameworks, rather than deeper learning of broader coding fundamentals?
Yep. When I saw the tired "Electron = bad" rant I immediately went to look for a tab entitled "Projects" to see if the author had created a native desktop app with "value"[1] equal to or greater than what a good Electron app provides (Slack, Spotify, Discord, VS Code). But lo and behold, no such tab.
[1]: [Electron apps] constantly crash and has no value over a native desktop application what so ever
All those examples could or do work just as well as web apps, right? I don't think that's the author's message, but I'd rather have those types of apps be web-based instead of Electron-based or native.
They are web apps... open.spotify.com, discord.com, app.slack.com. Even vscode.dev. The desktop product is just a repackaging of the web app with perhaps more OS integrations. If you don't want the OS integrations, use the web app. Some are available as a PWA too, which provides a nice middle ground.
They all have artificial restrictions in the web versions which make them worse, to encourage people to install the apps (actually, I'm not sure if Discord does this; I've only used it a little bit). For Spotify and Slack there is definitely an active push to make their web apps worse than they need to be.
At least for VSCode, it's the other way around - the app was first, and even if it was always an Electron app, it was still designed and written for the desktop. It actually took some time and effort to port that to vscode.dev, which wasn't there at the beginning.
I mean, I don’t necessarily disagree with the main point, but this feels like a short argument built on straw men and bad faith.
Personally, I think another commenter got it right pointing to commoditization (interestingly I find the commoditization of computing in general has greatly decreased my interest in it, but that’s a me problem), though I always find the ire towards web dev as the root of all evil to be a little too strong. There’s a lot of bullshit, but I see bullshit in plenty of other domains as well. Web development just has the status of being the largest platform at the time.
Also, over-abstracting is bad, so is over-/pre-optimization, and frameworks can add a ton of unnecessary overhead, but y’know, something about hammers. That being said, I’d rather inherit an overcomplicated Laravel application than an overcomplicated raw-PHP monolith.
> I’d rather inherit an overcomplicated Laravel application than an overcomplicated raw-PHP monolith
Similar sentiments. Say what you want about overengineering and React/Redux with strong opinions™ on the paradigm and insane transpile / build / asset-transformation pipelines... I'd much rather have that than JavaScript code-snippets written to the page in the middle of nested for-loops (inside a database query result iterator) in HTML::Mason (i.e. "Perl pretending that it's PHP"). Which has happened.
No, the madness will never end. Once you've been programming in the industry for a while, and have seen a few fads come and go, you start to get fed up with the churn and just want to do things in the simplest way. Electron? Bah, I'll just make native apps for Windows and Mac in C++ and Swift! React? Bah, I'll just write a simple pure-JS app as a .js file.
Because you know that in 10 years you'll have to un-learn all of your Electron and React skills and re-learn something new. Equivalent, pointlessly new, but new nonetheless.
The problem is, nobody is going to let you do that. Chasing the new shiny thing is, for better or worse, part of the software industry. You will always have to learn something new that does the exact same thing the thing you already knew how to use did 10 years ago. The best you can do is find some company that's stuck on some previously-cool technology, but even then they'll want you to rewrite it in the new thing.
This article reminds me of when assembly programmers were ranting how stupid it is to use the bloated C language to create apps when assembly is perfectly fine for that, and how it's all driven by lazy programmers who don't want to learn how to write efficient code.
Very true. Though early C compilers were pretty crap, and old machines unbearably slow. It's easy to look back from a modern context, but in the early days those arguments had some merit (at least at first).
I find it important to keep that in mind when arguing against shiny/heavy/silly new tech in the present: things like virtual machines, blockchain, new languages, etc. In the era of the mainframe, 'microcomputers' were jokes, until they definitely weren't.
Or hardware engineers ranting about how software is bloated when you can achieve the same thing by arranging some transistors. For example, Steve Wozniak created a working Breakout game using just 44 chips. This was the mid-1970s; back then each chip cost enough money that companies paid a lot for low-chip-count designs.
Today every computer has many billions of transistors.
It's not going to end until companies realise the waste or there are enough developers to satisfy demand.
Right now developers are in demand; they can charge a premium, and they need to justify a career by filling their resumes with accomplishments.
OSS (unmaintained or overengineered is fine) is unfortunately one of those accomplishments.
This problem would be greatly diminished if we had small companies delivering value and getting paid based on that - thriving or ceasing to exist as needed. Instead, big corps lead the way and pay politicians to complicate regulations and keep the status quo.
The bigger the corp the more it resembles a government: inefficient, full of useless layers and completely detached from actual performance.
Most engineers will spend their whole career in a place where their input doesn't much influence the success of the company, and they will be afforded plenty of time to dedicate to new non-innovations.
> The entry barrier to programming needs to [be] high! Programming is engineering, it's not something where you throw stuff at the wall and see what sticks
Sadly this is the fallacy of all educated people: “… is <what I’ve studied> and gained excellence in, not throwing stuff at the wall to see what sticks”.
The reality is that human improvement comes in two ways: 1. repetition for excellence, and 2. exploration for innovation.
Innovative exploration has always been a stochastic walk by relatively educated people into the areas that are their knowledge gaps. Even physical exploration is only made safe through mental exploration first, e.g. all of astronomy prior to space flight.
Even as someone most interested in excellence, you’re better off helping foster/improve exploration than trying to hold it back.
> Sadly this is the fallacy of all educated people: “… is <what I’ve studied> and gained excellence in, not throwing stuff at the wall to see what sticks”
No, it's the aspirational view of all insecure elite workers: “the job I do must have a high barrier to entry (otherwise, I will no longer command a premium salary)”.
> In the past IT people ... where very clever people. People with a high level of intelligence ...
I normally don't mind typos and would never comment on a misspelling like this, but I also cringe pretty hard at people lamenting how dumb everyone around them is. The fact that this kind of rant is so full of typos is fairly appropriate.
I noticed on his "About" page that he says that English is not his native language. So I'll cut him a break on spelling! I wrote to him with proofreading changes including this one, and he has now corrected them.
What I don't get about the rant is why it matters what the "modern" way is.
It's not like any of the old tools have disappeared, right? If you don't like Electron/NW, nobody's stopping you from building native apps with Qt or GTK, or you can just draw your own UI raw! If you don't like frameworks with their own templating systems, maybe don't use them? I don't like them either! And yet, rather than complain, I just don't use them...
Abstractions by their nature make some use cases easier and others harder. They trade off efficiency for familiarity. If an abstraction is "popular" it just means many people appreciate its benefits over its limitations for their particular use case. Being opinionated is the whole point. If you don't like it or if it doesn't fit, don't use it.
Gatekeeping in general is bad. Dismissing a modern framework because it's "too high level" is rather silly. On the other hand, dismissing someone only because they're not using the "modern" stack (whatever that is) is equally silly.
Responsibility stops me from building applications and services using obscure tools or writing my own tools. Because it'll be rewritten by later developers using commonly-used tools. There are certain expectations. You don't write your own logging library in Java; you use Logback or Log4j (despite them being monstrosities). You don't write your own framework or use some obscure framework like Helidon; you use Spring Boot, because everyone uses that. If you write your GUI application using GTK, it's likely to be rewritten in Electron just after you leave the company, and the next developers won't be able to speak C because they just finished some JS bootcamp and went straight into the industry.
I can use assembly language for my own projects if I want; nobody cares. But for code that I was hired to write, I'll use the most appropriate approach, and usually that's the most popular approach. Even if I'd love to just write some SQL, I'll bring in Hibernate and meddle with its configuration, because that's what everyone uses and knows (and hates) in Java. That produces maintainable software that won't get rewritten tomorrow because you used arcane tech nobody understands.
Using non-conventional approaches might be good for job security, though, I guess.
> Because it'll be rewritten by later developers using commonly-used tools.
Everything will eventually be rewritten, so this isn't much of an argument... What's "common" today won't be that way forever, likely not even for a couple of years (which is the whole point).
There's no reason to use a particular tool just because it's common. A proper developer uses the tool that is right for the job. If that happens to be old, so be it. An engineer should be able to decide what to use based on the merits of the tools, not just pick up the new shiny.
Not really. The problem with old versions of apps is that they don't support anything newer (standards, protocols, file formats, etc.). And then there are the security vulnerabilities.
But that aside, when picking between different pieces of software, one of which is Electron, and the other one is Qt, I'll always pick Qt unless the Electron one is drastically better feature-wise (haven't seen that in practice yet).
You also seem to be focused only on web development. I would argue that "IT people" are not limited to web development, and that you're overlooking the entire maker movement (including Raspberry Pis and Arduinos), the breathtaking development going on with self-driving cars, and the development still taking place in AI.
But as always, follow the money. Old-school engineering shops got their money from the military (the internet is thanks to DARPA, after all). Web 1.0 got its money from selling physical goods. Web 2.0 got its money from venture capital and digital goods. What are we selling now?
I fully relate to this article. Especially this point:
> Why in the world has this idiotic trend of abstracting everything away by layers upon layers of complexity gained such a strong foothold in the industry? Has everyone except the "unix greybeards" gone mad!?
The complexity adds brittleness and security vulnerabilities to software. Essentially every popular tool these days is over-complicated. Also, the fact that the industry has evolved into some kind of monolithic hivemind has had the effect of forcing everyone into these brittle, over-complicated technologies and has censored much of the nuanced dialog around software design in favor of blunt dogma.
Thank you for articulating this better than I could have. We had HTTPD... then Apache2, then nginx... and a guy here is messing with GPU visualization and introduced me to Caddy... which is better, for reasons. And just today I saw Gunicorn (which I thought was "gun-i-corn" for a moment) and I see it's been around for YEARS.
And I feel old. It used to be you secured a system when you knew every single thing that was running on it. Those days are long looooong gone.
And today, I had to go find Visual Studio 2019 because hashcat needs CUDA needs VS19 and Microsoft REALLY wants you to use VS22, but CUDA doesn't support it.
It's called "commoditization". Since programming is now a commodity, the barrier to entry had to be lowered in order to pump up the numbers. Growth at all costs!
There is still rock-solid engineering to be found, usually in domains where the stakes are high (for example, fintech), but anything web-related is best kept away from if one is allergic to bullshit.
This is a bad article, but it's a great example of why tech like Electron have succeeded.
This article is a long boring rant with zero insights, in which the author never thinks to ask why...
> Why in the world has this idiotic trend of abstracting everything away by layers upon layers of complexity gained such a strong foothold in the industry? Has everyone except the "unix greybeards" gone mad!?
... ok he does use the word "why" once but only in a rhetorical question. It's never asked honestly.
Electron solves problems that "unix greybeards" have been loudly and wilfully sticking their heads in the sand about for decades. Yes, it solves them badly, but that's a lot better than not solving them at all.
... he also complains briefly about PHP templating libraries which tells me he's never had to mitigate XSS vulnerabilities in large PHP applications.
I specifically said "large PHP applications" instead of just "PHP applications" because the difficulty is not in writing, it's in auditing and enforcing.
Even within templating libraries there are different levels of advancement around context-awareness: e.g. <div onclick="{{ val }}"> is still dangerous in languages like Twig, and is similarly challenging to audit. Alternatives like Latte mitigate this, but even Twig is still a great deal easier to work with than vanilla PHP.
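To make that concrete, here's a minimal sketch (TypeScript, with a hand-rolled `escapeHtml` standing in for a template engine's default HTML autoescaping) of why entity escaping alone doesn't secure an event-handler attribute:

```typescript
// Stand-in for the HTML-entity escaping most template engines
// (Twig included) apply by default.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#039;");
}

// Attacker-controlled value: it contains no <, >, &, or quotes at all,
// so HTML escaping leaves it completely untouched...
const val = "alert(document.cookie)";

// ...yet inside a JS attribute context it executes as script on click.
const html = `<div onclick="${escapeHtml(val)}">click me</div>`;
console.log(html); // <div onclick="alert(document.cookie)">click me</div>
```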
"Large PHP applications" doesn't immediately express to me that you combine styles, scripts, and markup in your templates and have a need for automated context-aware escaping.
"Large PHP application" just means "at scale", in terms of LoC, and - most likely - contributors. Which means (a) potentially anything may be combined in templates by somebody at some point without careful manual code-review, which is difficult to audit at scale, and (b) there are a larger number of files to audit, likely with a degree of complexity to their overall structure, which further adds to that difficulty.
While I agree with the overall point, these are different environments.
React normally operates in a DOM. Its templates are translated into JavaScript, and that JavaScript manipulates the DOM. PHP templates are just outputting strings.
ReactDOMServer also outputs strings; it just works with context-aware objects during template processing.
The point here is that PHP templates work with strings as a design choice: there's nothing about PHP as a language that's preventing you from taking a similarly context-aware template-processing approach.
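To illustrate, a minimal sketch (TypeScript, assuming react and react-dom are installed) of renderToString producing an escaped string from a value rather than splicing raw text into markup:

```typescript
import * as React from "react";
import { renderToString } from "react-dom/server";

// User input is passed to React as a value, not concatenated into markup...
const userInput = '<img src=x onerror="alert(1)">';

// ...so when the tree is flattened to a string, it is escaped by construction.
const html = renderToString(React.createElement("div", null, userInput));
console.log(html);
// → <div>&lt;img src=x onerror=&quot;alert(1)&quot;&gt;</div>
// (modulo the root attributes some React versions add)
```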
While slightly hyperbolic, the author is going after a real enemy: the tendency to use layers of abstraction where they might be overkill. In a similar vein, I wrote a comedic satire about the incredible overuse of Docker/Kubernetes:
It's a good rant. Basically, our computing foundation is from the 70s and we keep stacking crap on top. Given hardware advancements, it's embarrassing how we're making the user experience worse, not better.
What happened is speed. You need to ship fast or you're dead. The need for speed is not just due to commercial dynamics; it's also due to another major industry failure: programmer productivity did not scale up.
Two decades ago it was expected that by now we would be on 5GLs (fifth-generation languages), where you basically model applications instead of coding them. This never actually happened; we're mostly still at 3GLs, which is relatively low-level grunt coding: error-prone, repetitive, etc.
In fact, sometimes we even sink below 3GL. Recently Chrome shipped a feature that allows a developer to tell the internal layout engine whether it should regularly update a section of an HTML document or not. While performance optimization is admirable, it really is kind of ridiculous that a developer has to involve himself in paint cycles and GPU stuff. There are even articles telling you how to write JavaScript in such a way that the bytecode compiler runs slightly faster.
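For the curious, the Chrome feature described sounds like CSS content-visibility (shipped in Chrome 85). A minimal sketch of opting a subtree out of regular layout work, using a hypothetical #comments element:

```typescript
// Hypothetical illustration: the kind of layout/paint hint being
// described is likely CSS content-visibility (or the older `contain`).
const section = document.querySelector<HTMLElement>("#comments");
if (section) {
  // Let the engine skip layout and paint for this subtree while off-screen.
  section.style.setProperty("content-visibility", "auto");
  // Reserve an estimated size so the scrollbar stays stable while skipped.
  section.style.setProperty("contain-intrinsic-size", "0 1000px");
}
```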
What??
I'll continue with another example, to show my age. In the 90s, I used Borland Delphi quite a lot. It's basically drag-and-drop UI building, and then you connect code with events. Incredibly intuitive and productive. And the app would be lightning fast, much faster than today's "desktop" software running on hardware 100 times more powerful.
Thirty years later, and we're hand-coding for the web. There's no robust layout builder, no standard library, not even a standard development environment; we have to cherry-pick it together.
With productivity this low, and needing to ship fast, the only solution is to build another crappy abstraction on top of the pile of garbage.
Importantly, this often fails. It doesn't actually lead to higher productivity, rather even more complexity. Nor does it typically improve the user experience.
Why does it continue anyway? Because the person suggesting that framework never pays the price. When the choice starts to hurt, they are long gone. Some other poor sucker will be maintaining that ancient Angular project.
Which brings me to my final point: the profession of software architect seems to be dead entirely. It's just coders now, making seemingly random short term choices. But I guess that is the modern state of the world: extremely short term. Nobody does anything proper or fundamental anymore.
I don't understand how you can diss Electron when you see something like VS Code or Atom.
The benefits are clear. This software could grow a plugin ecosystem vastly exceeding anything that was previously possible even after pouring millions of dollars into it.
Electron is better than everything else by various engineering metrics. Just not RAM or CPU usage. And despite large RAM and CPU usage it manages to be way faster than many native IDEs that came before.
It's surprising to me that a Linux/Unix enthusiast of all people is so incensed about Electron and web tech.
The alternative to Electron isn't that all their apps would be faster and more stable. It's that many of them wouldn't exist (either on Linux or in general), and the ones that did exist would inherently be more resource-constrained on the development side — potentially resulting in increased bugs, instability, security vulnerabilities, worse performance, lack of feature parity, and/or ultimately even premature deprecation to focus on higher-priority platforms. The web is a great equalizer.
I'll put this another way: given the option, all else being equal, would anyone honestly choose to run the same app in the form of WinForms + Wine rather than just Electron? It might be a little faster for certain apps in the best case scenario, but on average it's going to be worse in just about every way; that's before taking into account that in real life all else is never equal (see previous paragraph).
You have multiple cores, and all of them constantly do something when your computer is on. You have graphics processors that run code every frame, probably half of which is not strictly necessary.
Is the CPU load from a blinking cursor good or necessary? Definitely not. Is it important? Mostly not, when compared to all the benefits.
And this issue was resolved pretty effortlessly (if you omit trolling).
I feel like someone could have written the same 10 years ago. "But why do we need something so inefficient as Python, we have C that does the job and is much faster". "GUIs? Such a waste of resources, CLIs do the job just fine and you go faster with it!"
At the same time, I'm also personally quite bored with the sheer amount of complexity we seem to put into everything these days: layers of frameworks on top of frameworks, especially for simple projects. My friends who attended bootcamps lately learn React, not web dev. That is a problem, IMHO, in the long run.
Getting a job straight out of a bootcamp is hard.
Getting a front-end job out of a bootcamp without knowing the most popular technologies is even harder.
A bootcamp's interest will always be to increase its students' odds of being hired rather than to build foundational knowledge.
200% behind you! My point was not to shoot at bootcamps. (Though currently in Europe, my friends working at bootcamps have never seen such a fast hire rate.)
Bootcamps follow what the market requires. My main point was more about the market we have created for ourselves :).
I don't disagree with the theme, but it would be much more persuasive coming from a Lisp corner of the world.
Just today, I was discussing with a friend the lack of tail calls in mainstream languages (outside of FP), and how the typical reason cited ("you lose debug stack traces!") makes no sense under any scrutiny: the stack losses from looping are necessarily total, yet tail calls can easily be implemented flexibly, allowing optional stack growth in certain situations like debugging. So we get more complexity (iterative loops) because the pop culture of software development repeats irrational conclusions (apparently) without thinking.
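A minimal sketch of the equivalence being argued (TypeScript; note that among major JS engines only JavaScriptCore ever shipped the tail-call elimination ES2015 specified, which rather proves the point):

```typescript
// Tail-recursive sum: the recursive call is the last action, so an engine
// with tail-call elimination could reuse the current stack frame.
function sumTo(n: number, acc: number = 0): number {
  if (n === 0) return acc;
  return sumTo(n - 1, acc + n); // tail position
}

// The loop it is equivalent to; mainstream engines force you to perform
// this rewrite by hand instead of doing it for you.
function sumToLoop(n: number): number {
  let acc = 0;
  for (; n > 0; n--) acc += n;
  return acc;
}

console.log(sumTo(1000), sumToLoop(1000)); // 500500 500500
```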
Same goes for folk advice like "never rewrite" (okay, okay, it went badly for Netscape once, or something), "never build your own tools/languages" (yes, this isn't software, but the Empire State Building was constructed within a single year by a firm that constructed all of its own tools custom for the job), "role-based access control is necessary for security" (capabilities are actually quite cool), "always reset your password every three months" (lulz). There are many others.
I am starting to believe that behind every phrase of false but universally-accepted IT folk wisdom is a billion-dollar idea waiting to be demonstrated; extra points if you can combine a few together AND give the currently lost wanderers out there a bridge to the simpler, more enduring way.
This may be somewhat off-topic, but I think the point of "you lose debug stack traces!" is that with tail call elimination, you have more possibility for confusion from people who inadvertently write a tail call, or who write something tail-recursive without caring about the stack growth (e.g. if it only recurses a few times in their use cases). That's where the increased difficulty comes from; of course loops/state machines/tail-recursive functions are hard to debug either way. I guess the problem is basically that with loops you know what you're in for with regards to debugging, whereas tail recursion is kind of implicit: you have to learn how to recognize a tail call, which is non-trivial, or at least non-trivial compared to distinguishing a loop from a function call.
> So, now your simple news article or blog post takes ages to load on a 1 gigabit connection and requires about 3 times as much electrical power even though you're only serving text and perhaps a few images
If all the website is doing is serving news articles or blog posts, then sure. But, web is increasingly used to build apps. Apps are different from documents. No, it isn't crazy to build apps that run in the browser; web has a fantastic distribution story. And apps can't be reduced to a markup language and a stylesheet.
I hear there's a lot of things "called operating systems" out there; nearly every computer seems to be running one. Couldn't one somehow run applications on those?
I call this PLIRT: Pyrrhic, Low Interest Rate Tech.
Low interest rates drive money into speculative hands, and then into the salaries of tech cargo cultists, Resume-Oriented Architects, tech mono-hammer-wielders[0], and other people who should never be given the power to make decisions. This creates all sorts of bad ideas and tech. We are in a monstrous PLIRT bubble ATM. Pyrrhic, because them winning means all of us losing.
[0] Every problem's solution is methodA/techX: the only one these people ever use.
I think the problem with this article is pretty simple. Confirmation bias plus an unhealthy dose of Juvenoia. He admits in his update that the jobs he takes are cleaning up after projects that have already been recognized for their failures. He also reduces younger developers to a very basic strawman.
He also doesn't appear to have recent experience being a developer that was involved with the first iteration of a project. Yes, things do get over complicated and need to be simplified later. There's always going to be someone getting over excited about using a new tool and insists on using it despite it being expensive and not meeting any additional requirements. The problem is his opinion is entirely based on hindsight and the fact he only gets work after that excited guy has either changed their mind or left.
He also doesn't clearly see the potential benefits of some of these frameworks and architectures. He definitely doesn't get the very basic idea that proper architecture requires choosing solutions based on the business needs. Small, simple web pages don't particularly benefit from complex architecture. Purely offline desktop applications certainly don't benefit from using React Native, especially if they're doing anything complicated.
However, standardization of solution approaches across applications does have the benefit of making it much simpler to maintain an employee pool capable of continuing to develop and maintain the software. That's why, when an internal tool is needed to manage semi-complex data, it's generally best to continue using Angular for the web UI and not suddenly decide to use Razor.
Some abstractions are worth it, some are madness. The only way to know for sure is to actually try it, and gain years of experience in a particular paradigm.
I built my first raw PHP and vanilla JS web application SaaS about 15 years ago. Some things have changed for the better since then, and some for the worse. Since then I've seen hundreds of frameworks, ideas, concepts come and go. Some of them, like Symfony/Laravel are a godsend which made my development much faster and easier. Other ideas, like React/Angular are unnecessary piles of complexity which add little value to the customer. I maintained a large Angular 2 app in production for several years, and after that built a startup on React, then I decided frontend JS frameworks and SPAs simply weren't worth the effort. Went back to HTML-only and a little vanilla JS, and the result was absolutely refreshing -- far less complexity, and the users can't tell the difference (it's actually faster and MORE reactive than most react-crap apps built these days). But I wouldn't go back to raw PHP, the abstraction layer that Laravel provides is well worth the performance tradeoff.
This is exactly it. It is very difficult to know up-front if added complexity will actually save you anything in the end.
New concepts, frameworks, or whatever are created for a reason. If you willingly adopt any new shiny toy without thinking about what problem it solves, you get what the author is ranting about: unnecessary complexity without any gains.
I think that the problem is that a great deal of knowledge (and good guesswork) is needed to make those decisions and even then it is hard.
Sometimes the removal of an unnecessary requirement will be the biggest factor in bringing the complexity down. But it is not often the case that this is obvious to the development team.
I do love listening to friends in the IT industry describe how different companies solve problems. Nobody is ever on target in regards to "complexity used" vs. "complexity actually needed"!
Genuine question that I've been asking myself for the past several years: In what senses is software engineering actually an engineering discipline?
If you make a project trade-off for the sake of code maintainability, is that based on empirically tested knowledge or following a design pattern guided by an artisan's intuition about how code will be interpreted?
This is a question I've struggled with ever since I started working in the field.
In terms of definition, software development ticks all the boxes - depending on who you ask.
Yet every way you look at it, something just doesn't feel right. Sure, there's best practices, models, a strong mathematical foundation, but on the other hand there's way too much freedom.
By "freedom" I strictly mean lack of limitations. While the laws of physics dictate what can and can't be done in traditional engineering disciplines, the same cannot be said about software.
A good design can be quantified, tested, and improved based on hard data (and eventual failure). A good program, however, is super hard to qualify. What even is "good maintainability" in terms of software? Is it a bunch of metrics like cyclomatic complexity, Halstead volume, and lines of code haphazardly cobbled together to form an arbitrary index value?
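That "arbitrary index" is not hypothetical; here is a minimal sketch (TypeScript) of the classic maintainability index formula, the Oman–Hagemeister variant later adopted, rescaled, by Visual Studio:

```typescript
// Classic maintainability index: three unrelated metrics fused into one
// number by empirically fitted coefficients.
function maintainabilityIndex(
  halsteadVolume: number,
  cyclomaticComplexity: number,
  linesOfCode: number
): number {
  const mi =
    171 -
    5.2 * Math.log(halsteadVolume) -
    0.23 * cyclomaticComplexity -
    16.2 * Math.log(linesOfCode);
  // Visual Studio rescales the raw value onto a 0..100 range.
  return Math.max(0, (mi * 100) / 171);
}

console.log(maintainabilityIndex(1000, 10, 200).toFixed(1)); // ≈ 27.5
```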
With traditional engineering you have material properties like tensile strength that you can match against requirements and safety margins. What's the equivalent of tensile strength of a software product and can we measure it?
For decades people in scientific computing have relied on Fortran programs that defy any modern notion of maintainability. The subroutines and "drivers" are limited to 6-character names and functions with >25 arguments are not uncommon.
Yet they still work, are still used and stood the test of time despite archaic properties [0], which require users to memorise large tables of codes for data types and routine names.
I also feel that despite these limitations, libraries like LAPACK are more readable and maintainable than modern C++ monstrosities like this [1].
Honestly, I like well-designed modern C++ as a library/app developer; it's fast, type-safe, and (more or less) expressive. The worst things about it are the unnecessarily complex syntax/concepts (TMP scares me; `template<typename> using Foo` simplifies code; concepts are complicated but help avoid TMP) and slow compile times (unavoidable, to my knowledge). I looked at nlohmann/json's homepage and I find the API surface perfectly readable (if you avoid ""_json and the more complex stuff; I didn't look at the library itself).
Similarly, I think that fmtlib is a beautiful piece of software (fmt::memory_buffer is beautiful, but I dislike the mysterious template errors when you write code wrong, and the poor documentation on how to avoid them). It's as convenient as C format strings (but type-safe, and thankfully uses fat strings composed of a pointer and length, or begin and end, rather than null-terminated strings with O(n) strlen), and much better than C++ streams, which hold string-formatting state (setw, precision) that is wholly wasteful for binary file IO and is evil global state (possibly TU-local) for console IO.
Modern C++ is also a lot better than the 3+-layer-deep inheritance hierarchies of 2000s C++, where Child::f() delegates to Base::f(), which calls g(), which is a virtual function pointing to Subclass::g(), which delegates to Base::g(). That style is alive and well in Qt and KDE code.
What bothers me about libraries like nlohmann/json isn't so much the style or the public interface (which is fine and I actually like, too).
It's the fact that a conforming and performant JSON parsing and serialisation library shouldn't require 22kloc of C++. That's madness. I also struggle to call a library "modern" that uses goto statements.
Especially if the library is even slower than smaller alternatives like [1].
But that's just my personal opinion, I understand that other people have different priorities and are fine with including 785kb of C++ template headers as long as the public interface suits their needs and taste.
This is just an example of the kind of freedom software allows, because it doesn't cost you anything and there is no objectively "better" in this case.
I truly think it's more craft than engineering, and even that's somewhat generous. It's mostly guesswork, hacks, inter-team politics, and trend chasing.
Imagine an engineering team designing a toaster. They have real constraints to consider, like physically fitting all of the components inside a single package and selecting materials that won't melt during operation. They have to ensure that the product won't injure the user or light their house on fire. They have to carefully consider the lifespan of each component and design the product to last for a minimum number of years. Failing to do these things could potentially result in lawsuits, recalls, injuries, or even death.
Now take the average web product. There are no real constraints whatsoever - almost everything comes down to opinions over aesthetics. Even the speed and reliability of the application is somewhat unimportant. Planning ahead is explicitly discouraged. Instead, software "engineers" are encouraged to be agile, move fast and break things, iterate based on feedback/complaints, and push out hotfixes when bugs occur. Software is expected to rot and be rewritten every 3-5 years in the latest language du jour. I could go on and on, but anyone who's been in this industry for more than a few years knows what I'm talking about.
> Software is expected to rot and be rewritten every 3-5 years in the latest language du jour.
That's simply not true in general. There are plenty of fields where this is not the case. Control software for airplanes, power plants, medical implants, military equipment, etc.; also banking- and insurance software, certain scientific HPC libraries, software for spacecraft (especially deep space missions) and so on.
The software world doesn't just consist of web development, cloud computing, and cheap consumer products :)
Ten years ago I did some consulting for a large merchant wholesaler and their backend ran on IBM mainframes. At some point I was maintaining CICS programs written in high level assembly with changelogs dating back to 1984 (after getting some coaching from the resident senior programmer first).
This was a multi-billion dollar business and the software had to work. 24/7, 365 days a year. Every minute the system was down cost thousands of dollars. If you managed to mess up the system controlling the warehouse logistics, monetary loss was the least of your concerns, though - a dozen angry truck drivers asking for the guy in IT who messed up their schedule was far scarier :D
So yeah, in environments like this, every little change or update had to be carefully planned and agreed upon by all stakeholders from all departments.
> Imagine an engineering team designing a toaster.
Heh... That supports TFA's point (if taken broadly as "old tech was better tech") in yet another way: Harking back to the famous Technology Connections YouTube video, the best toaster ever was conceived sometime in the 1940s or 50s.
Programming and Software Engineering are related but not the same thing. There is yet another thing - coding, which is also something else. Yes, software engineers do program, and programming involves coding. But someone who has come up with, say, a computational algorithm or a formula may ask you to implement it in a form that a computer (or, more likely, a compiler) can understand. That’s coding. Programming, on the other hand, is what we do when we need to solve a specific problem, or a part of a problem, by writing a program or a library of modules, which involves several steps such as choice of data structures and algorithms that make it possible to solve the problem efficiently. Finally, software engineering is building software from various, potentially many, pieces which include custom program modules as well as those written by third parties. The degree of “engineering” involved grows exponentially as we go from mere coding to programming to, well, software engineering.
> It's so bad that I haven't managed to find a single industry with the same massive amount of stupidity, with the exception of perhaps the fashion industry.
It's because the demand for warm bodies that sort of understand how to code is so extremely high that there is no room to discriminate sufficiently. We are all working with dressed up normies that have zero clue. It's not their fault however, and trust me, they are looking for the exit.
The piece that resonates most strongly with me is using JavaScript to render HTML and CSS on the client side. Almost always, there is no good reason for this beyond the fact that lots and lots of JavaScript programmers decided it was the right thing to do.
As has been mentioned here already, it's due to cost cutting.
JS bootcamp graduates are easy to find and employ relatively cheaply. Experienced developers (whether self-taught or classically trained) are fewer and cost more.
So most of the new and shiny tools and apps are broken on arrival, and often stay that way for a long time (MS Teams is for instance awful to use even to this day).
However there are plenty of wonderful applications out there that aren't being written haphazardly but rather by professionals that care about quality and stability.
The author should just select the tools he wants to use, and stay away from the rest. It is what I do.
>But, noooo, you're a dinosaur, you don't understand anything
Honestly.. Yeah.
Modern tools operating at higher levels of abstraction allow teams to create large amounts of functionality very quickly for more platforms. This changes the economics of software supply.
It's a fallacy to look at an app and say "I'd like this more if it were native instead of written in Electron" because Electron is part of how that app was delivered with that functionality at that price point. It's like ordering a $20 pub steak and saying "I'd prefer this steak if it were A5 Wagyu."
> In the so-called modern day it's like everyone - except a few - has dropped their brain on the floor. They keep inventing "revolutionary new ways" of doing the exact same thing that could be done in a dozen ways already.
I mean, I think there are serious rose-tinted spectacles going on here. We don't tend to think as much of the horrors of the past as the javascript-y horrors of the present, but they were there. The worst excesses of OLE/ActiveX (1990-early noughties) spring to mind....
> In the past IT people, whether we're talking about programmers or something else, where very clever people. People with a high level of intelligence that took serious pride in doing things in a meaningful and pragmatic way.
Uh, no, IT people were idiots decades ago too. I was one of those idiots. I've watched myself become less of an idiot, over a very long period of time, because I can see my old self in other people today. But even that is a fallacy - I'm not actually less of an idiot, I just see my old idiocy and assume because I've seen it that I'm smarter now.
I find the "impostor syndrome" meme pretty funny. Tech people seem to get impostor syndrome when their egos develop a crack and they see their own lack of understanding, and worry somebody else will see it too. But then a tech person with a stronger ego convinces them that it's all fine, because actually we're all either idiots or geniuses and nobody can tell the difference.
The tech industry is basically at the same level of advancement as people who built small buildings in the medieval period. Large enough that you need an experienced craftsman to put it together, but small enough that they're not using geometry or doing the math necessary to safely build large structures. The idiocy will continue until society forces this industry to be a real regulated engineering discipline.
If they're passionate and argue well, but are willing to disagree and commit, then they are useful. I've been places where nobody pushes back and asks, "why are we creating nano-services for everything?" "Why is it a good thing to put two NICs and the Internet between every little thing in our system?" When no one is willing to point out that the emperor has no clothes and we're here to deliver features, not endless rewrites, that is a path to doom.
> well, perhaps with the only exception that now a 2 year old baby can make something shiny that you can click on with your mouse.
Rudeness aside, this shouldn't be dismissed. The same designers you already have on staff who make your web sites look nice can make your desktop apps look nice using the skills they already have. Things like React Native mean that the engineers who make your web app can make your mobile apps (on both major platforms at once!), again using mostly the same skill set.
Yes, this leads to lower quality software! Electron apps use tons of memory and have huge binaries, React Native apps take longer to launch, etc. But the cost savings are hard to dismiss, especially for startups trying to get to market ASAP. And most importantly, the market doesn't seem to be punishing companies using these lower-performing technologies at all, so why would you not use them?
As someone who works with stuff the author hates (I work with Angular, NativeScript, and reverse proxies on a daily basis), I agree that it's all horribly inefficient, and I'd love to do it the "right" way. I just love the time savings more.
> The same designers you already have on staff who make your web sites look nice can make your desktop apps look nice using the skills they already have.
Wrong way around. The opposite, yes, but not this way.
Uh... if Electron really let a 2-year-old baby (or, hell, even an eight-year-old) write software as widely used and appreciated as Slack, Signal (Desktop), or VSCode... I'd say that whoever wrote Electron should actually get a Turing Award, because that would be society-shaking!
I mean, no joke, democratizing software production is huge, and important, and hugely beneficial -- when more and more of our lives involve software, being able to produce it is hugely empowering. I don't know how much Electron really does that (we know it doesn't really let 2-year-old babies produce software), but that the author would find it of no benefit whatsoever if it did, or even something to make fun of or disparage...
When in other discussions people are like "the reason there aren't good no-code development tools is because software engineers don't want there to be, they are threatened by them, they keep them from us!" I'm usually like, nah, that's not what's going on... but it turns out they're literally talking about THIS GUY!
So anyway, I stopped reading there. Near the beginning.
As others have pointed out, it's a strawman post, and the arguments being made in favour of what he is against aren't what he says they are.
However, I thought the more interesting thing to address would be:
> There is something seriously wrong with the IT industry...
I think you'll find the same thing whenever you get close enough to any industry but don't agree with the general consensus. From afar, most industries, including IT, seem to be well organised and to know what they're doing. It's only when you get into the weeds with them that you see the chaos surrounding it all.
Case in point, music. I've been playing around with synthesizers for the past few years as something a bit different from programming. But as I've got closer, I've come to see that simple problems often have inane solutions, where things that you thought one box could do on its own have to be handled by 10, all plugged into each other with different cables and in a different order.
So yeh, I'd say this is just a natural consequence of being in the industry. Talking to friends in completely different industries, I don't think this is unique to IT at all
It's funny that Electron and web devs are blamed for "increasingly more abstraction". The thing is: You can't really build cross-platform native apps.
Maybe stop blaming people who use tools that enable this and start blaming Apple, Microsoft and the Linux community for being incapable of providing one native UI kit with one native language that truly enables cross-platform development.
As someone with a background in philosophy of language, the idea that I can write a program in Python instead of C is basically my barrier to entry.
I learned first-order logic, and I don't have a degree in CS.
Does that make me a bad coder? Maybe. I can still apply my knowledge of logic to solve a lot of problems that someone who spent years learning C instead of studying mathematical logic can't.
> The entry barrier to programming needs to high! Programming is engineering, it's not something where you throw stuff at the wall and see what sticks and just assume that programming languages, browsers and operating systems are made of magical dust.
Reading this made me wish the entry barrier to writing was higher. I mean, you should definitely have a copy editor and proofreader to make sure you are not leaving out words (such as “be” in the first sentence). Back when writing first started, you had to get your own clay, cut your own cuneiform stylus, carefully craft your characters, and then bake your tablets. Now any random person on the Internet can write.
/s
Actually this aspect of programming is what makes it so exciting. There are no gatekeepers. You can throw stuff up against the wall and see what sticks. You are basically limited only by your imagination, creativity, and time in what you build. This low barrier of entry to programming is to be celebrated and we should be working to lower it even further.
I'm not normally one to fuss much over typos, but I similarly found it hard to take the author seriously when he makes a typo in the very same sentence that he attempts to extol the superior intelligence of IT professionals of yore.
> In the past IT people, whether we're talking about programmers or something else, where very clever people.
If you're going to talk down to people and insult their intelligence, please at least have the good sense to spell "were" correctly.
> in the very same sentence that he attempts to extol the superior intelligence of IT professionals of yore.
Whenever someone tries to express this sort of thing, I always have to wonder whether these superior professionals of yore are pioneers of the industry back in the days when commercial computing was just starting to bud, but it's always some guys from the mid-late 90s who seem much angrier.
It's because most programming efforts are built on top of a predetermined stack. If that stack isn't a great fit for whatever reason, then your only alternative is to make another layer that is a better fit, even if that same level of abstraction exists lower in the stack.
Of course, you don't want to do that for every project, so there gets to be one common open source layer to serve that purpose. So now lots of people like the new layer and use it, and then it becomes prescribed for other developers. Then it's not a perfect fit for all those developers, and they start writing another layer to change it again. GOTO 10.
Another thing that happens is if you are used to working at layer N, then layer N-2 is really scary low level stuff. It's much more comfortable to write layer N+1 rather than step down to N-2. Even if N-2 is actually a lot simpler if you just lose the fear and try it.
I don’t care about the memory my programs gobble up because we’re not in the fucking 80s where we have to fight for every kilobyte.
I don’t care about building abstractions on top of abstractions because we’re not building code to last centuries, we’re just trying to last for about a year or two before we do a completely different solution.
You might as well ask why modern architects build thin, crappy glass buildings instead of solid stone structures that last for millennia. Simple: we have better engineering knowledge of materials and techniques, so we can build things to last just long enough for when we want to replace them, with the least amount of effort and material required while still being within safety tolerances. We do not need to overbuild. You overbuild only when you need something to last a very long, undefined amount of time, or when you don't know what the fuck you're doing, so you play it safe.
The need for talented developers has vastly outpaced the supply. The bar has to be lowered to include a wider pool of people. There is currently no other solution for businesses.
The unfortunate side effect is that capable developers aren’t usually interested in solving simple problems inefficiently. So you won’t find them applying to your jobs.
> Programming is engineering; it's not something where you throw stuff at the wall and see what sticks and just assume that programming languages, browsers, and operating systems are made of magical dust.
This is the mistake: web/desktop app programming is exactly a place where you throw stuff at the wall and see what sticks. It does not matter how great your app is if no one is using it. Conversely, if your app is successful, you can always pay someone a ludicrous amount of money to fix it; software is nice like that.
Databases and operating systems are a completely different beast. In those domains things get so complex that proper engineering is essential, and in those areas you also tend to find "better" engineers. But building web and desktop applications is like building tool sheds: engineering rigor is irrelevant. It matters when you build skyscrapers, though.
To those that agree with the author: it's quite disheartening to beginners (often learning JS as a first language) to hear these things. It's just purposeless gatekeeping that borders on anonymous hazing. Who in good conscience can assert:
>The entry barrier to programming needs to high!
as a blanket statement. It's an absurd assertion that seems to presume all programming does or should take place at the same level of mission-criticality/resource efficiency/engineering elegance.
To get a bit meta: I remember ~7 years ago when I was starting my career, these kinds of uncharitable, gatekeepy screeds would mostly just elicit nods of agreement from the HN masses (while making me wonder if I could ever be a "real programmer", severely exacerbating my imposter syndrome). I'm glad the web tech Overton window has shifted somewhat and there is more push-back now.
The author is very on-point about reinventing the same things over and over again, sometimes with even less functionality than their predecessors, and also consuming more resources while doing it.
Seeing how far the demoscene has taken the C64 and ZX, I wonder if we'd see some sort of resurgence in skill if there was some sort of mandate on maximum hardware requirements, e.g. something like a 1GHz Pentium III with 256MB of RAM. That would've been a very comfortable machine in the early 2000s to do things like text messaging and audio/video calls, and in fact I did. Yet, having experienced the Horror of Microsoft Teams (https://news.ycombinator.com/item?id=20678938), on a machine far more powerful, the decline is definitely real.
Asking a few questions in response would be meaningful:
Do you ever write scripts to solve routine tasks so you don't have to do them over and over again?
Of course you do. Most (not all) of these examples are attempting to abstract a problem. In some cases they do that well; in other cases they don't.
Have you upgraded your OS in the last 20 years?
Again, of course you have. Each of those upgrades has some useful and some not-so-useful things.
Did you upgrade your vehicle to a Tesla?
I can't answer this one, but the same applies here. If you want to say "well, yes, because it helps the environment," my response would be simply that a large number of auto advancements over the last 20-30 years made the vast majority of that a reality for you.
So no, you will not stop attempts at progress. Some will win and others will become Studebakers, but this is the activity that drives innovation.
Always seems odd that the cranky dinosaur doesn't use his/her unix super powers to gather likeminded talent to build the world's greatest, simplest, cheapest app and retire a billionaire. It's almost like there is more to the decision making process than that. /s
When you solve a problem, you deprive someone of one they can manage / extract value from, and so they invent new ones to manage. Problem solvers aren't actually that clever; they're more like beasts of burden or working donkeys (asses) who have become wise to how they are being managed, but this doesn't change the fact that they are still asses. Most frameworks are new ways to favourably manage solved problems, and pointing out this fact without understanding it's on purpose is what a smart ass would do. (I am very much a smart ass.)
When it clicks that most people are miserable because it works for them, and that most problems are trivial but for it being someone's job to manage them and ensure they're never solved, you can probably find some peace.
> Why in the world has this idiotic trend of abstracting everything away by layers upon layers of complexity gained such a strong foothold in the industry?
Everyone wants to prove themselves by having a GitHub project that is used by a lot of people. So solutions start searching for problems...
It would be really nice if software developers that rant about X instead come up with a better alternative to X. You don’t like Electron? Great! Come up with a better alternative that still gives you the multi-platform/cost advantages of Electron. You don’t like Node? Great! Show all of us what a better solution would be that still gives us all the advantages of Node. It is super easy, but also super lazy, to ignore the advantages of X and only focus on the disadvantages of X. There is a reason why X became popular! An alternative needs to give the same benefits while fixing the problems. If nobody can come up with a solution for that then perhaps X is a pretty good technology.
I believe we, as an industry, are driving towards frameworks that can be operated with story-like analogies describing what needs to be done, with no technology knowledge at all. The core computer scientists creating such a framework create it with entry-level non-developers as the "end users" in mind. Such a framework is the holy grail of software development because it will enable any Joe with any software idea to hire anyone to make "their dream". The "no code" movement is an early manifestation of this trend. BTW, the VFX world is ahead in this trend, creating production frameworks requiring no 3D graphics technical knowledge at all...
This works great in many scenarios. But when the time comes to complete a task that the lossy abstraction that is the framework doesn't facilitate, Joe hits a wall very fast. He then needs to either unravel the abstraction, find another one or say it can't be done.
This is where the holy grail really pays off: those Any Joes that are deeply financially committed must hire real computer scientists to implement their non-standard needs. Of course, those real computer scientists work for the framework-provider corporation, and their expense is high enough that Any Joe is forced to give up equity for their non-standard feature. It is an operational maze designed to financially and mentally exhaust the new entrepreneur so they can be absorbed and taken over by the status quo of existing deep pockets. Unless you've been asleep, this is exactly how SAP grew and continues to expand.
This is all done for the sake of higher development speed. Building a native app in Qt will take a lot more time compared to an Electron equivalent; the same goes for a React Native vs. an Objective-C mobile app.
I've come to think the reason is competition. Your Qt app may run 4x faster while taking 3x less memory, but I will build my Electron app first, eat the market, and race to the exit before yours gets released. (And yes, this also explains why nearly every software product we use is such trash: the ones that aren't trash never got past MVP, because trashier ones were out the door first, and got funded.)
This is a problem, but I can't see any solution to it. It is the nature of the industry itself.
As I see it, the cause isn't industry-specific, it's just incredibly apparent because of how fast our industry repeats the cycle of:
- Make thing
- Thing becomes ubiquitous and widely used
- Thing has warts and limitations
- Make new thing on top of old thing, because old thing can't be reworked without major consequences because of its ubiquity. Repeat.
If you've worked in any job with reasonably complex processes, there's likely a few procedures that no one really understands, but when people try to change them or build on them, they only break things or get more confusing.
Not sure what can be done about it, but I have a hard time blaming anyone or anything.
So basically hardware improved 1000 times, but out of that improvement, corrupt programmers took 999 for themselves, to make their lives easier. The end user got almost no benefits. Where is my Minority Report UI?
Perhaps not corrupt programmers, and hardware improved much much more than 1000 times, depending on your reference.
It is a valid question though: "Did we spend the speedup the hardware gave us wisely?" It's reasonable enough, in my mind, that we spent some of the speed to create higher-level languages, so that we could solve problems faster. But in all honesty, did we spend the remaining speed all that well?
Most of the problems we solve are still data input, some calculations, and an output. Sometimes we skip the calculation bit and just retrieve our data a little later; other times we skip that part too and just store stuff.
With the exception of video and gaming... what have we actually built in the last 15 years that couldn't be done before? Sure, we can scale to more users, but we use more servers than ever to do so. For the average office and home usage, we wasted so much speed, and it's not clear where it went.
Chat applications are the best example: my Google Chat eats 700MB of RAM, yet they only added a "Create video call" button and inline images, compared to IRC or other platforms from 2000 and earlier.
Don't get me wrong, there are stuff like neuroimaging, better weather reports, climate models, simulations, CAD, all that good stuff, but that's not helping the average computer user who just wants to check Facebook and read spam.
I'm starting to wonder if everyone who makes these arguments has even tried making a native app across Android, iOS, Windows, Mac, and Linux. It's challenging enough to do it with Electron/React Native, and as much as I love/loved Qt, all of the effort of making things work and look nice, packaging for each platform, and then coming up with a separate strategy for mobile (because Qt is not good on mobile) is not ideal.
I don’t disagree that Electron apps kind of suck, but it has practical answers to problems that Qt, for example, doesn’t. There’s nothing like Electron Builder…
1. Companies use Electron because they don't want to spend money on the desktop. After all, desktop users are declining.
2. You'll only get much worse, or not even usable, software if you use the same small budgets to build native apps on multiple platforms.
Building an excellent Electron alternative that is not web-based is not in corporations' interest. Hackers are all ranting about Electron, but the only way to defeat Electron is to get together and build a high-quality open-source alternative because not many will pay for it, or it would be just rants forever.
> It's so bad that I haven't managed to find a single industry with the same massive amount of stupidity, with the exception of perhaps the fashion industry
Perhaps a bit too much hyperbole, but lots of industries have these cycles of increasing complexity. The necessity of such complexity is in the eye of the beholder.
I mean, systems / low-level / embedded engineering doesn't suffer (as much) from this. And I am not sure if the author is aware, but all of his complaints are about web development. And I don't disagree: abstraction on top of another abstraction.
We should be making things simpler, but instead we continue to add complexity, and most of the time needlessly, since the industry gravitates towards solutions that promise economies of scale. There is no turning back, and we end up with tools that fit Google instead of 90% of the market.
I am going to ask and duck: what is a seriously good cross-platform (yes, also web) [meaning Windows, Linux, Mac, Android, iOS, web] alternative to Electron?
I have used Haxe 2/3 (OpenFL), JavaFX, and Qt. No, they are not even close.
Next thing: what is bad about VS Code? I really like this application as a prime example of perfect cross-platform compatibility. It is also Electron-based. Just because 90% of Electron apps are shit (because Electron is fast and easy to use and, most importantly, to deploy) does not mean it's the library's fault, right?
Funny that you mention this. I run it perfectly on my raspi 3 using code-server on Home Assistant. Now please tell me, what is a better IDE than VS Code for a raspi 3? It barely uses any RAM on the server side. Had it not been written in Electron, the code-server project would not be possible.
Using a raspi as anything but a headless device when programming is a stretch. So the only IDEs that come to mind are vim, emacs, Eclipse Che, c9, and VS Code. I would use VS Code from this list; again, the others are not even close.
Expanding this further: before code-server, I had to use VNC, NoMachine, X2Go, ssh, mosh, or xrdp to use an IDE on a remote server. None of these solutions are lightweight or stable across an unstable connection, besides console-based IDEs.
Now I can finally write software as if the IDE would run on my rather old laptop while executing code on a very powerful server with minimal setup cost and barely any ram usage (compared to a full blown IDE).
I am saying that it would not be possible to put it in the browser if it were not web-based.
It is a perfect fit for headless devices. It also runs fine when you put a desktop on the pi but this is certainly not a use case I would prefer.
It also runs superb on chrome books. You can literally put it anywhere.
Please, let's just be done with this discussion if you can answer my initial question: what UI library would you prefer over Electron that is as cross-platform compatible? I would gladly switch if there were a lightweight alternative.
There are times when complex frameworks are necessary because they've already dealt with the complexity and times when they're overkill or stupidly inefficient.
The point being made in other comments about minimizing dev time vs maximizing reach / impact is really solid (re: Electron).
My real takeaway from the rant (which I found pretty amusing) was that the author isn't terribly effective at discussing these issues with others and feels like they just get labeled as 'Old' when trying to do so.
> In the past IT people, whether we're talking about programmers or something else, were very clever people. People with a high level of intelligence that took serious pride in doing things in a meaningful and pragmatic way.
Yeah, I wouldn't be so sure about that. Lots of self-taught people blew things up big time, and plenty are deeply familiar with the stack that the company they have been working at for 20 years relies on but would be lost at another company.
Completely agree with the sentiment of the author; my question is why is this happening? I see a lot of comments here mentioning things like groupthink or specialization as the reasons, but how can really smart people suffer from groupthink, and why would generalist engineers unafraid to write Sass/GraphQL/JS all in the same line of their pure React component want to specialize?
I hope this thread doesn't turn into a shit show, but instead sheds some light on the real reasons.
I have the opposite feeling that because of the huge growth in the number of developers new great software tools are being released so frequently that it's hard to keep up, but also the competition and the number of options to choose from is incredible. And actually, this has been possible thanks to the tools the author blames. I am speaking from my rather conservative point of view, however it's hard to not appreciate that.
I think over time there were more use cases for everything. People still make sites with pure HTML and CSS (myself included), but that is not always the best way to do it. For example, take a complex admin control panel like AWS's: it would be a nightmare to replicate that without a front-end framework. Really complex business logic made with a bunch of .php files? You would end up with a code base that is a mess and hard to maintain.
90% of developers come from the jQuery and WordPress world. Back then, they were using an existing framework and trying to reinvent the wheel on top of it.
These days, with changes in the market, they are on the bandwagon of Next, Nuxt, etc.
Only 10% of JavaScript engineers are fluent, know the ins and outs of the language, and can understand and challenge various design patterns. The rest are just making six figures serving HTML and CSS hidden behind ultra-complex frontend architecture.
> has no value over a native desktop application what so ever - well, perhaps with the only exception that now a 2 year old baby can make something shiny that you can click on with your mouse.
This is huge when it comes to time to market and leveraging existing code and skills. Who are the people who are even learning native desktop anything anymore? It is hard enough to find developers. Now go find people who know/are willing to learn that.
Well, if someone would just put in all the vast work to make a good, cross-platform, multithreaded, graphics-card-native GUI toolkit for desktop, this would all go away.
Or, you can pack up an entire browser, a full Javascript runtime and interpreter, and use something that both programmers and users know that kinda, sorta works.
Sure, "worse is winning". But "better" hasn't even stepped up to the starting line.
It is surprising how the comment thread degenerates into a supposed market need, ignoring the issue.
Nothing new has been invented for the web since the JSF specification; all popular frameworks are copies of it or of its components. More than 10 years have passed, and copies of it continue to be created.
And beyond the web:
Do we really need another Linux distro with deliberately obfuscated usage settings?
Does every application really need its own package manager with a new syntax?
> Why in the world has this idiotic trend of abstracting everything away by layers upon layers of complexity gained such a strong foothold in the industry?
Because you can jumpstart without knowing anything about the problem at hand. And appear smart. And do tech talks about it. And not dealing with the problem at hand saves you from failure. It's just not adult.
Familiar to anyone who would rather clean the windows than start on their thesis.
I'd love to go a layer deeper; it's just that it's so, so hard for me to grok it all and deal with the platform complexities, or even to get a job as easily in a spot where lower-level stuff is used rather than, e.g., Electron (unless you've already spent decades grokking it). Maybe the lower levels should be simplified (honestly, we could use a revolution at the lower levels).
It is called fashion-driven development: everyone looking for the latest fad in the hope of being the next keynote speaker at Conference XYZ, having their blog posts spotted on the technology news forum of the day, selling consultancy services, whatever.
Just like with some silly fashion decisions, the only cure is to wait for it to be displaced by a new fashion wave that is hopefully less silly.
Problem 1: all the native platform owners built their own empires around proprietary APIs as a competitive advantage. It's easier to do this, especially early on, to crush the competition. Now things have matured and the various platforms have extremely similar capabilities. Not the same, and not the same perf, but something like 90% overlapping.
Problem 2: Distributing things via the web is orders of magnitude easier than anything else. No need to fly halfway around the world to install stuff on client machines, or send them a CD to do it themselves. No need to wait for the lords at Apple, Google or Microsoft to kindly verify your app (which takes a week). You can release on the web as much as you want. 100 deploys per day? Sure, why not, if your CI can handle it.
Problem 3: People are reluctant to install stuff, or can't (not much space on the device, no admin rights, etc.). The web just works, for anyone (assuming a moderately up-to-date OS).
Problem 4: Be generous, imagine you have infinite resources and money is not an issue. Now you need to build the same bloody thing for web, Windows, Mac, Android, iOS...
Fine, Alice built the Windows version and Bob built the Mac version. But hey, there's a small discrepancy between the algorithms in the two. And the labels are different, oops. Shit, the German translations are different too! Yadda yadda yadda, been there, done that. And the Windows team has higher velocity than the Mac team; what do we do now? We want feature parity at all times.
It's not impossible, but the bigger the company, the more insane those overheads on every little thing become. Maintaining two or more enormous codebases in different techs and keeping them in sync visually, algorithmically, in localization, and in release process... good luck.
And hiring experienced native devs is ultra hard and expensive. Hiring tens or hundreds of them, even more so. I'm not surprised big companies are moving away from native, especially since a web version is de facto mandatory for all kinds of software these days.
Now, iOS is different, because Apple puts a lot of restrictions around what is allowed and around monetization. Plus the quality bar is pretty high, and iOS has some capabilities that are difficult to get from web browser technologies. Tough call; on iOS, native it is.
The blog post addresses none of those issues. I wonder if the OP has ever worked on a team that had to ship the same product across 3+ tech stacks.
"it's not something where you throw stuff at the wall and see what sticks and just assume that programming languages, browsers, and operating systems are made of magical dust"
Actually, it is - and he just described evolution. And for the record, I don't like it, but what can I do? I would like to go to the cathedral, but I'm stuck in the bazaar.
> Why in the world has this idiotic trend of abstracting everything away by layers upon layers of complexity gained such a strong foothold in the industry?
This is probably because the industry wants backwards compatibility. It almost seems a miracle that we have finally switched to Python 3 and stopped importing Unicode support "from the future" (`from __future__ import unicode_literals`).
These things exist to solve "modern" problems, like the "inconvenience" of waiting 0.5s for your page to load. So you have to add a placeholder. But for that to be smooth, you need an SPA, and then SSR. So you end up with a whole new approach, just to avoid seeing a blank page or text popping in for 0.3s.
> Programming is engineering, it's not something where you throw stuff at the wall and see what sticks and just assume that programming languages, browsers and operating systems are made of magical dust.
News to me. I've built an entire software career around throwing stuff at the wall and seeing what sticks.
This rant reads like some old dude got left behind and is now yelling at clouds; this is what gets on the HN front page nowadays? It's clear he hasn't researched WHY these newer technologies and tools exist if he believes it's just us adding "unnecessary complexity."
> In the past IT people, whether we're talking about programmers or something else, were very clever people.
I've been in and around this industry for a long time, and that's not exactly the way I remember it. (To clarify, of course there were clever people, but also a whole lot who were otherwise.)
Ultimately I kind of agree with the sentiment but the sensationalism here is cringe. Also, some new tools and languages are indeed worthy improvements. It's worth calling out the ones that aren't; lashing out at all new technology as a whole is not how you should come at this issue.
I consider it evolution. We are blindly iterating over many possible variants, we try mixing languages, and yes, we throw things at the wall and see what sticks. Some things do stick; other things will be replaced. It's chaos, but it's getting better.
Are random tech people with blogs ever going to stop submitting clickbait-titled posts?
The actual subject of the rant:
> Why in the world has this idiotic trend of abstracting everything away by layers upon layers of complexity gained such a strong foothold in the industry?
"Why in the world has this idiotic trend of abstracting everything away by layers upon layers of complexity gained such a strong foothold in the industry? Has everyone except the "unix greybeards" gone mad!?"
Well, I remember that I had to set up PHP and mess with php.ini, get nginx installed, then php-fpm, get them talking, and only then could I install NextCloud, which would complain a bunch about my nginx settings, timeouts, URL mappings, etc.
Now that I have some "layers upon layers of complexity", I just add a few completely understandable lines of "code" to a docker-compose.yaml file and boom, there I have it. Want to update? No more downloading, moving the old folder aside, extracting, then running occ; I just `docker-compose pull`, `docker-compose up -d`.
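A minimal sketch of what such a compose file can look like (the images are the official nextcloud and mariadb ones, but the passwords, host port, and volume names here are purely illustrative):

```yaml
version: "3"
services:
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: change-me   # illustrative, use a real secret
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: change-me        # illustrative, use a real secret
    volumes:
      - db:/var/lib/mysql
  app:
    image: nextcloud
    restart: always
    depends_on:
      - db
    ports:
      - "8080:80"                      # host port is arbitrary
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: change-me
    volumes:
      - nextcloud:/var/www/html
volumes:
  db:
  nextcloud:
```

Updating really is just `docker-compose pull` followed by `docker-compose up -d`.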
OP can rant all he/she wants because things aren't the way they used to be anymore, but to me it just sounds kind of childish and tantrum-like. You can still do things the old way if you want: make a native desktop app, make as many as you like. I also prefer them, but I can see the advantage, from the dev's point of view, of using the same tech for the web as for the desktop, so this is what we get. And I think a lot of this complexity exists so that we can have infra as code, and that code is nicer if it is simple.
"All web servers have a built-in router. Whether it's NGINX, Apache, lighttpd, Caddy or something else. But no. Let's not use that; let's add yet another router on top of that with a single entry point and then basically re-write every single request before it gets served."
But have you tried Traefik? Add a few lines to your compose file and all your stuff has HTTPS; you want some basic auth? Add a few more lines. And we can put it all in git. IMHO that's much better than wrangling every project's favourite web server and its settings. I cry when I think about the stuff I would have to configure without these layers of abstraction.
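To illustrate, a rough sketch using Traefik v2 label syntax (the domain, router/middleware names, resolver name, and the htpasswd hash are all illustrative, not a drop-in config):

```yaml
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.tlschallenge=true
      - --certificatesresolvers.le.acme.email=you@example.com   # illustrative
      - --certificatesresolvers.le.acme.storage=/acme.json
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  app:
    image: nextcloud
    labels:
      - traefik.enable=true
      - traefik.http.routers.app.rule=Host(`cloud.example.com`)   # illustrative domain
      - traefik.http.routers.app.entrypoints=websecure
      - traefik.http.routers.app.tls.certresolver=le
      # basic auth; generate the user:hash pair with `htpasswd -nb user pass`
      # (dollar signs must be doubled to $$ inside a compose file)
      - traefik.http.middlewares.app-auth.basicauth.users=user:$$apr1$$replace-with-real-hash
      - traefik.http.routers.app.middlewares=app-auth
```

And all of it lives in git next to the rest of the stack, which is the point.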
React Native: one codebase, one team, one language. Native iOS and/or Android apps: two codebases, two teams, two different languages. I don't know, you tell me: unless you are government-funded, which is more cost-effective?
I think most systems today do more with less complexity than, say, 10 years ago. I worked on Java systems that had 12-15 layers in the stack. Good luck finding that kind of complexity in a modern system.
It's what the company prioritizes. If they can get away with spending less money on development and think the end-users will put up with whatever they made, then that is why Electron succeeds.
Madness will only end when developers stop being bitches and start asserting their points to customers who want their web pages to look like triple-A game demos.
If the old way of doing things is really so much better than "modern webdev", then where are all the high-profile sites written using nginx routing and raw PHP?
Where is that xkcd about how real men use a magnifying glass and the butterfly effect to program?
Abstraction is indeed getting heavy, but I don't think it is a problem. Witness the recent enthusiasm for static sites and serverless. Sure, that has its own complexities, but the desire for clean, minimal design is still there in some quarters, without reverting to plain HTML only.
This entire post is peak boomer-meme. It's a rant because "new is bad." Heaven forbid they just stick with what they like and let everyone else churn; their beloved native apps and PHP aren't going anywhere. Weird to see a "didn't have my coffee" moment make the front page.
Fair. Just realize that it'll happen to you (in 15 years you'll be raging, "goddammit, why does every UI require a 2 GB Roblox runtime") and go easy on the poor boomers. Circle of life.
Oh I'm already there. I'm 43 and have seen the "everything old is new again" cycle at least twice already. I'm just old school and don't need to air all of my gripes - I just let the youngs make mistakes we've already learned from.
I hole-hardedly agree, but allow me to play doubles advocate here for a moment. For all intensive purposes I think you are wrong. In an age where false morals are a diamond dozen, true virtues are a blessing in the skies...
First a fall, in per portion to the RAM used by Electron, my C script uses basically none. These new programmers are just not four meal your with how to code properly. Should we create better education info structure for new developers, or just hall of cost everyone who codes in Electron? I'm leaning towards the ladder.
The one who feels personally attacked is the article's writer, IMO. From this article and others on his website you can see he clearly has a problem with newer developers, especially those coming from JavaScript. A lot of resentment can be read between the lines.
In general HN tends to express resentment towards web developers, especially those who make use of JavaScript frameworks. You can find people ranting about web development on this site at least once a week. I mainly chalk it up to those people having no fucking clue what they're talking about because they don't have the relevant experience, like a physicist talking about politics. They are smart and knowledgeable about things I will never understand, but they don't know anything about this and should probably shut up instead.
Whether they are right or not, complaining about the modern web development practices on this website is a waste of keystrokes, because no one is listening.