Things that used to be hard and are now easy (jvns.ca)
371 points by pingiun on Feb 20, 2022 | 305 comments


I would also add something along the lines of "building big webapps". Large frontend codebases used to be a scary Lovecraftian horror where things were touched only out of utmost necessity. Runtime errors for weird corner cases. Dangling dependencies that no one could figure out whether they were safe to update/remove or not. Refactoring was both an art and an arcane incantation. Modern tooling and best practices have made most if not all of these problems nearly non-existent, and I'm grateful for it.


Oh man, this brings back nightmares of upgrading jQuery, and third party components that were copy/pasted into the codebase and modified/customized. Large frontend codebases are indeed miles ahead of where they used to be.


The step backwards is that before, you couldn’t upgrade jQuery because the CDN script tag was on thirteen different partials, and you weren’t sure which were still in use, and now you can’t upgrade React because it depends on v4.3.2 of chalk-babel-duster-pack-plugin and that’s insecure but it has a breaking change in v5 you don’t know how to fix, so you’re just waiting for a total rewrite of the app to pull out CRA and replace it with Vite or some such.


There's some kind of law of conservation of complexity. Often we don't solve it, we just move it around or repackage it.


It's called the Waterbed Theory.

https://en.m.wikipedia.org/wiki/Waterbed_theory


Or you can avoid CRA in the first place. CRA is frankly a hot mess. But React itself, and for the most part even Webpack, are quite reasonable.


What are your issues with CRA?


> Things that used to be hard and are now easy:

> Concurrency, with async/await (in several languages)

The number of times I've been bitten by the very specific semantics around async/await in JavaScript makes me wonder if I've hit some kind of a ceiling in trying to grok it.

Coming from the world of threads, I've found Kotlin's structured concurrency approach the best of the lot I've used so far, even if it isn't without its quirks: https://kotlinlang.org/docs/composing-suspending-functions.h...


Oh yes. I hate async/await so much. If I could uninvent one thing in programming it would be async/await. I wrote async/await code in C# for about a year. I did really try to learn it. I invested a lot of time. I did read Stephen Cleary. It often made sense for a split second only to fade away immediately. Usually the software would work, but I didn't understand what was going on in my own code.

And others don't understand it either. Neither my code, nor the async/await concept. Stack Overflow is filled with accepted answers that are just doing it wrong. This was part of the frustration. I did know enough to see that the proposed idea was wrong, but didn't have a better idea. Most programmers don't even seem to understand that there is a difference between a C# Task<T> and a C++ Future<T>. I do. And it doesn't help one bit.

My lack of understanding is not for lack of trying. I'm interested in different paradigms in programming. I'm eager to learn new things. While I'm programming OOP on the job, I learned Haskell [1], Idris and Prolog in my spare time. I mean, I can actually program in these. I think especially learning Prolog shows I have some flexibility in thinking when it comes to programming. I would consider myself an above average programmer, compared to my colleagues.

But async/await is simply above my mental capacity. After one miserable year of desperation with C#, I quit my job and started a new C++ job in a new city. Life is good again. When this company adds async/await to their C++ (not yet in the language, thankfully) I will quit again.

I can't even describe what I don't understand about this concept, so far out is it. I only remember the purely practical anecdote that I could make a WPF command execute async but it was well-nigh impossible to make CanExecute async. While completely absurd, the actual problems were way more conceptual.

[1] Kind of ironic that everything in Haskell is async, I know. But it never got in the way there.


I think async/await is kind of a didactic mistake. It's marketed as a way to not have to think about the underlying abstraction, but a lack of understanding of how it works under the hood will almost certainly bite you in the ass. I find this to be especially true when people are writing mixed async and sync code. The best way for me to learn was writing all of the code before the async/await sugar existed (Scala in my case).


I think the intent is that you don't have any sync code at all.


Yeah but in practice it doesn’t work like that. There’s a ton of sync code that already exists and people know how to write sync code.


I have a pretty good mental model of it in JS and TS, because it builds on the pre-existing Promises API, because the transpilation process let me see the equivalent vanilla code while I was learning it, and because the event loop plus the single threaded nature of JS meant that libraries were already using asynchronous code everywhere.

When it comes to other implementations, though, I find them much harder to work with. In particular, existing synchronous or threaded code can block them, there's no good way to fix it without rewriting the synchronous code or shoving execution into a traditional thread, and all the existing libraries are synchronous. It feels like you need a completely new standard library to take advantage.
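
To make the first point concrete, this is roughly the kind of equivalence the transpiled output showed me (made-up endpoint, and a sketch only; the real transform handles more edge cases):

   // the async/await version (made-up endpoint)
   async function getUser(id) {
     const res = await fetch(`/users/${id}`)
     return res.json()
   }

   // ...and roughly what it desugars to against the plain Promises API
   function getUserDesugared(id) {
     return fetch(`/users/${id}`).then(res => res.json())
   }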


I've never had any trouble with async await in JS and Python. It's pretty close to being just like threading if everything were under one giant GIL.

It's just "Stop here and wait till the task finishes" and "Only one thing happens at a time and you don't go till someone stops".

If you quit rather than got fired, it seems like you must have learned async well enough to do your job.

I almost wonder if C/C++ et al aren't teaching people bad habits of thinking about how things work behind the scenes instead of just trusting the language.

Haskell probably has the same effect, but it is also said to have some benefits too. It teaches people to look for underlying mathematical patterns in everything.

The thing with modern programming is it seems to be designed assuming you're thinking in terms of the language and libraries you are using at the moment, and building your architecture the way the language designers intended.

As soon as you try to actually understand how it works, or you try to get creative and do anything outside their opinionated boxes, it gets ugly; otherwise it's usually fine.


I hope someone can reply to this and help me learn more about async/await in c#. I'm finding it confusing too, and I can't put my finger on why.


The only person that I trust to understand async/await is Stephen Cleary. He has a blog and has written a book. Maybe try that. Official MSFT documentation became hand-waving or non-existent very quickly, back when I tried. Back then, plenty of other blog/tutorial/SO answers on the internet had fatal flaws such as making one thread per task or even mixing up the terms.


If you don't know what a thread is, start there. Otherwise, you might read this.

https://blog.stephencleary.com/2013/11/there-is-no-thread.ht...


You can't add async programming to a blocking language like Java or C#. It doesn't mix well, and will fragment the ecosystem with libraries that mustn't be used together.


> The number of times I've been bitten by the very specific semantics around async/await in JavaScript makes me wonder if I've hit some kind of a ceiling in trying to grok it.

Got any examples? I've mucked around a fair bit with JS's async and never felt like it got in the way. (Other than the whole function coloring ordeal)


> I've mucked around a fair bit with JS's async and never felt like it got in the way.

When functions return more than one Promise, or when one chains async function calls, it isn't really clear which function's Errors are likely to break out and result in an unhandled exception (read also: https://archive.is/ULT7P).

For example, does trapAllErrs() below trap 'em all? Not really. Why? The answer involves understanding the microtask queue and its place in the event loop.

   // traps all errors from a1 and a2, or so we hope
   async function trapAllErrs() {
     try {
       const p = Promise.all([a1(), a2()])
       const results = await p
     } catch (err) {}
   }

   // sync code in a1 never throws
   async function a1() {
     // do something sync
     // ...
     // a3 and a4 may throw errs
     return Promise.any([a3(), a4()])
   }

   // sync code in a2 may throw
   async function a2() {
     // do something sync
     // ...
     // a5 never throws
     const x = await a5()
     // ...
     return ans
   }


I don't understand why this is complex. We know that Promise.all rejects on the first error. That's literally the most basic knowledge of Promise.all (it either completes when all promises resolve, or rejects on the first error).

You only need to understand basic JS to know why this doesn't work. You don't need to know anything about the microtask queue or event loop.

Or am I missing something?


Would you do a .catch on the await p to fix that?


The solution to catch the errors for each function would be to catch errors from each promise individually (possibly within their functions, or within wrapper functions). This solution would also clearly state the intent. Performing concurrent tasks and only catching the errors at a higher level is a tricky thing to define behavior against; .Net/C# does it with the `AggregateException`, but even that comes with its own difficulties (which error comes from which promise? which came first? did one cause the other?).
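
Something like this sketch is the wrapper idea (made-up helper name, reusing a1/a2 from upthread), so Promise.all itself never rejects and the intent is visible:

   // wrap each task so its error is caught individually and tagged
   const settle = (name, p) =>
     p.then(value => ({ name, ok: true, value }))
      .catch(error => ({ name, ok: false, error }))

   async function runAll() {
     const results = await Promise.all([settle('a1', a1()), settle('a2', a2())])
     for (const r of results) {
       if (!r.ok) console.error(`${r.name} failed:`, r.error)
     }
   }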


Why not use Promise.allSettled...?
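
For reference, a quick sketch of what that would look like (again reusing a1/a2 from upthread):

   // inside an async function: allSettled never rejects, it reports each outcome
   const results = await Promise.allSettled([a1(), a2()])
   for (const r of results) {
     if (r.status === 'rejected') console.error(r.reason)
     else console.log(r.value)
   }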


Yes, the message was that you could write asynchronous code as if you were writing synchronous code, but that really isn't the case. It is easier but it isn't a panacea.


As someone who leans frontend, I find firebase and serverless (somebody else’s server) solutions really powerful for prototyping an idea or building a proof of concept. You can get something in front of your users tomorrow. I would second guess SaaS startups that start by building the infra/data models unless they’re going for a highly technical play (aerospace, hardware, etc) or already have a ton of market knowledge.

I’m in the middle of a MongoDB to Postgres migration for a product with 0 users. When you’re trying to scale from 0 to 1 user, everything should help you explore the user’s problem space, even if it seems backwards to build the UI before the backend.


> As someone who leans frontend

> I would second guess SaaS startups that start by building the infra/data models unless they’re going for a highly technical play

Do you not see the problem with these 2 statements? Of course not dealing with non-JavaScript is easier for a JavaScript developer.

> You can get something in front of your users tomorrow.

This is something we’ve been able to do with Rails or Django for nearly 2 decades now. I can do something similar with more interactivity with Phoenix LiveView and Postgres these days.


The logical conclusion would be that you need more frontend than backend when you're trying to scale from 0 to 1.


The OP's premise is that you need more frontend. No comment has any information supporting it, and both are disagreeing on it.

If what you take as conclusion is the premise, you have just been a victim of confirmation bias.


In my experience, if you're not using Heroku, getting a Rails/Django app launched requires at least a day or two dedicated to devops faffing about (creating server instances, setting up CDNs, making sure the database is minimally correct, having some kind of deploy script or something). This won't even get you nice-to-have things like multiple environments with CI and deploy previews. Switching over to serverless can reduce that to less than one day and get you deploy previews. Is that a good tradeoff? It just depends on the app, how long lived it will be, what the performance requirements are, the costs of hosting, etc. For my needs, it's often easiest to just use Netlify's AWS Lambda adaptors.
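
For what it's worth, a minimal function in that setup looks something like this sketch (assuming the default netlify/functions directory and the standard handler shape):

   // netlify/functions/hello.js  ->  served at /.netlify/functions/hello
   exports.handler = async (event) => {
     const name = (event.queryStringParameters || {}).name || 'world'
     return {
       statusCode: 200,
       headers: { 'Content-Type': 'application/json' },
       body: JSON.stringify({ message: `hello, ${name}` }),
     }
   }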


I’m not arguing about setting up your own infrastructure. Heroku launched when Rails was 2 years old. What’s the point of this comparison if you arbitrarily handicap the most common Rails deployment environment?


Agree with focusing on UI first. But I haven’t seen a speed advantage at this stage of serverless vs a simple monolithic backend app with a bunch of endpoints. You can get something in front of users within hours. Either way allows you to overengineer and focus on the wrong thing.


Agreed, and the upside to having a monolithic backend is that if/when your app gets serious you don't have to port it or redo it. Sure, there are definitely large apps that use Firebase et al, but much more often than not, for reasons of cost or compliance or customization, the backend ends up getting written later anyway and APIs usually need to change.


Agreed, I can get something up and running using Express or FastAPI (frontend + backend) easily in a few hours, whereas Firebase would be new to me. Use whatever you know/can to get something in front of the user.


I don't necessarily disagree with this, but I think the difference isn't between frontend and backend, it's between strongly typed and weakly typed, or structured and unstructured. The strength and weakness of Firebase and MongoDB is that they let you shove unstructured data into the db without defining a schema.

Once you have any users at all, you need to migrate that data every time your data model changes. When you need to migrate data, not having a schema goes from a blessing to a curse, because you're trying to change the structure of unstructured data.

For this reason, I'm skeptical of both sides of the coin. I'd almost rather start with an ephemeral in-memory store that forces you to rewrite before you have actual users, rather than be tempted by MongoDB or Firebase.


Scaling from 0 to 1 user? Is this a personal project?


Nope


+1 on Firebase. I recently helped a buddy build an app and decided to give it a try. Holy crap. All we had to do was build the app! Significantly faster than either of us estimated.


Be careful with this, especially with mobile apps that can't be easily updated. Using Firebase has a tendency to put what would otherwise be backend logic into the frontend, which can make it much harder to make changes quickly once your app has users.


I would recommend learning backend (which is much easier than frontend!). If you have basic backend skills then you can do the frontend and the backend just as easily, and it gives you a lot more flexibility going forwards.


What is there even to migrate??


Maybe you meant from 1 user to more than 1? I mean, unless it is gathering dust on a shelf, anything running on a computer has at least 1 user.


I would love to see the reverse of this 'things that used to be easy and are now hard'

That would be an interesting read


* Running any website with user-contributed content. Spam is just everywhere nowadays, and you cannot publish a simple hiscore list for your game anymore without some kind of anti-spam measures, otherwise the spammers find it astonishingly fast and flood it with spam, even if it doesn't allow any clickable links

* Thinking about it some more, everything related to security. Back in the days, the whole Internet wasn't port-scanned for known vulnerable services multiple times a day, and you could get much further with less refined security practices.

* Drawing things on screen. Back in the days of DOS and QBasic, it was just a single line of code to put the screen into a different mode and start to paint lines and rectangles on it, placing text at certain coordinates. Now you have to have a GUI (though canvas elements in web pages make it a bit simpler again; still not quite as simple as in the 80s/90s; quick sketch after this list)

* Getting access to a particular memory location, without layers and layers of virtualization in between
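
The canvas sketch mentioned above, roughly the equivalent of SCREEN 13 plus LINE/PSET in QBasic (a hedged sketch, not a one-liner):

   // create a 320x200 "screen" and draw on it
   const canvas = document.createElement('canvas')
   canvas.width = 320
   canvas.height = 200
   document.body.appendChild(canvas)

   const ctx = canvas.getContext('2d')
   ctx.fillStyle = 'blue'
   ctx.fillRect(10, 10, 100, 50)      // filled rectangle
   ctx.strokeStyle = 'red'
   ctx.beginPath()
   ctx.moveTo(0, 0)
   ctx.lineTo(319, 199)               // a line across the "screen"
   ctx.stroke()
   ctx.fillStyle = 'black'
   ctx.fillText('text at (200, 180)', 200, 180)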


Maybe OpenGL?

Nowadays to draw a triangle you have to write, compile and attach two shaders written in its own separate language that interface with the main code.


> Thinking about it some more, everything related to security.

wasn't it more like "nobody gave a f back in the days"? :P

>Back in the days, the whole Internet wasn't port-scanned for known vulnerable services multiple times a day, and you could get much further with less refined security practices.

what years are you talking about? seems like a lot of decades ago


I'm not sure about drawing things on screen, actually. There are still tools similar to QBasic: Processing and Python's turtle module, in particular.


I think browser security has made a lot of stuff harder. E.g. CORS has made loading JSON off a remote server harder, chrome now won't load mic/webcam feeds off non-HTTPS.
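
e.g. something as simple as this now fails in the browser (made-up host) unless the other server opts in:

   // works from curl or Node, but in a browser this is blocked unless
   // api.example.com sends Access-Control-Allow-Origin for your page's origin
   fetch('https://api.example.com/scores.json')
     .then(res => res.json())
     .then(data => console.log(data))
     .catch(err => console.error('likely blocked by CORS:', err))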

(Not saying these are bad things! Just annoyances I've run into during development.)


Running your own email server. Receiving email is easier since mail routing and spam filtering have matured. But sending email is now hard, with authenticating yourself everywhere and making sure you are not blocked by the big providers.

I'd also say that even with letsencrypt it is harder to set up a web server than 10 years ago, when you didn't need https at all.


Writing programs that most users could run. It used to be a matter of firing up Delphi, moving some widgets around in a form, writing event handler code, and compiling. The install builder had it all done in less than an hour. It would run on almost any Windows PC in the world.

Now everything has to do Unicode, be piped through the internet, work on macOS, Linux, Windows, iOS, Android and the web.


I think you can argue this one either way. For example, I would say that writing a little html+js calculator web app these days is very easy and works on billions of devices around the world (approx anything with a web browser and an internet connection). I'd say this is reaching more users more easily than your delphi example.


Lazarus works on Linux, Windows, Mac and even Win 9x.


- registering internet resources (e.g. IPv4 block or desirable domain), due to scarcity

- various features that are harder to implement due to changing user/regulator expectations (from simple websites to stuff like GDPR)

- multimedia in the browser post-Flash, pre-WASM


- sending emails (spam filters)

- publishing apps (app store / google play rules)


Search.

In the past, Google just worked. Nowadays, Google must be used with uBlacklist or something like that to yield decent results.


Back then you had a few programming languages. Now you have hundreds to make a decision.

Back then you had a few tech stacks. Now you have thousands to choose from.


Logging in.


Creating and publishing animated/interactive stuff (aka Flash.)


Submit and post a link to submission here. I'll vote it up.


There's another facet I've watched with interest:

>> Building videogames, with Roblox / Unity

This has resulted in something like 90 new games entering Steam every day. While some games may still be art, they're increasingly difficult to find among the noise. I've heard the mobile game market is in worse shape[1] because of the deluge of games. "More games for everyone!" sounds great - but note that definitely doesn't mean more "good" games - or more "innovative" games, all the while encouraging rampant copying/stealing of ideas.

[1] https://www.youtube.com/watch?v=Q30qZSEnI9Q


Is there a law for "when a thing becomes easier and/or cheaper to do, more of that thing is created, and the mean quality of that thing goes down"? Of course, there are positive effects as well — desktop publishing was responsible for countless design crimes, but it also unlocked some real innovation during print's last stand.


The mean quality goes down, but the total number of games above a certain threshold goes up, which makes discoverability a bigger problem than it used to be, but, assuming we can solve that problem, makes the gaming industry better.


Jevons paradox


That's a super interesting take in that the normal assumption that "there's more games" is a supply-side statement.

But the Jevons paradox would apply a step before that. The "demand" is really the people wanting to write games. When they do that, they then create "supply" in the video game market.

Per the parent comment however, the Jevons paradox doesn't address the mean "quality" of games decreasing.

Similar things happened to published writing (prior to the Internet, published writing typically involved editors and professional writers) or photography prior to digital cameras. That is, technology made something far easier to create, so it was indeed more commonly created and the mean quality (at least in the artistic sense) went down.


Part of the problem is that it's not necessarily quality going down, but the fact that quality itself is often defined by the amplitude of the shared experience. If thousands of people share the same art, then it's considered of great quality - even if it was just, say, a few white kids coopting some black music. When supply volume makes it fundamentally very hard to build such a shared experience, then it becomes difficult to recognise greatness in a (relatively) objective way.

It's the same for all art, as you say, starting from the figurative ones - the world has probably produced more professional painters in the last 150 years than in the whole history of the world, but among modern and contemporary works, almost nothing can really hold a candle to Michelangelo. Because there used to be one Sistine Chapel in the world, and now we're drowning in imagery every minute of our lives.


This almost feels like some parallel of the Marxist claim that proper capitalist competition should asymptotically reduce margins per unit sold to zero but for creative industries.


The other part has no relation to the Jevons paradox either. Stating that as creating a game becomes "cheaper" people do more of it is just the basic behavior of demand.

The Jevons paradox is about people spending even more on the thing than before the thing got cheaper. If you state "as creating a game started to take less time, people spent more time doing it", that would be a form of the Jevons paradox. But just "as it started to take less time, people did it more" is not.


Wikipedia: In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes Jevons' effect) occurs when technological progress or government policy increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the rate of consumption of that resource rises due to increasing demand.

https://en.wikipedia.org/wiki/Jevons_paradox


I would call it just "demand elasticity". Jevons paradox is a subset of demand elasticity where there happens to be some relatively fixed 'resource' like oil. There's no 'resource' equivalent for something like video gaming; it's just demand for writing games is elastic and when a game gets easier to build, people demand more supply, which comes into existence then.


I actually think it's more like "supply elasticity." It gets easier/cheaper to create something and you get more of it--possibly at lower quality (as is generally the case here because there aren't any shortcuts to having good game mechanics).

Demand may or may not go up but, in any case, it's probably harder for the user to find the gems among all the clones and drek (and harder for the gems to gain attention).


... not J Evan's?


Writing GUI apps for Apple products.

Too many variables to mention, but I wrote my first Apple application in 1986. It took several weeks, and wasn't much to look at.

These days, I can spin out a full-fat, shippable app, in a couple of hours. I do that all the time, for my test harnesses.


I used to do this on Windows in the 90s with Delphi: this is only really remarkable because we lost all the good tools somewhere along the way.


Languages that are good at arbitrarily composing many modules into a stand-alone executable are becoming more popular. Rust is my favorite; one can even package wrappers over C or C++ libraries in a way that fits seamlessly into the cargo build process. Go is good here as well, as long as you stick to pure Go. As soon as you introduce cgo, that adds a hurdle for any user of the library, particularly on Windows. Zig looks like it's on the right track, though we still have to wait and see what the package manager will look like. And I'm sure there are others.

We also don't know yet if any of these languages will develop strong ecosystems for rapid GUI development.

And of course, Lazarus (for Free Pascal) is still a thing, though it doesn't have hype, or more importantly, large mindshare.


I wrote plenty of GUI applications for IBM PCs in the '80s. It wasn't difficult with Turbo Pascal.


Visual Basic 6.0 chiming in from the 90s.


When Turbo Pascal came to the Mac, it made life MUCH easier. I started with MPW.

What was always difficult, was the toolkit. Apple had MacApp (but not when I started). In fact, the original Adobe Photoshop was a Pascal/MacApp (1.0b9) app.


I would extend this to GUI apps for every system. Even GTK apps on Linux have gotten there.



Packaging is still a nightmare if you want to ship closed source binaries on Linux.

Too many options (flatpak, apt, rpm, appimage, snap). No simple "please turn this directory tree into a package" option. Dependency hell due to lack of binary compatibility (and no simple way of putting DLLs in the app directory like on Windows or in the dpkg like on macOS).


>> These days, I can spin out a full-fat, shippable app, in a couple of hours.

How do you do that? What's your approach?


I write native MVC, using Swift, UIKit/AppKit/WatchKit, and storyboards (I'll get around to SwiftUI, but it's not a hurry).

If you do that, the template is basically a functional app, and you really just need to tweak the storyboard, and supply any ViewControllers. Most of the time spent, is in Adobe Illustrator, generating graphic assets.

Most of my test harnesses are simple, 1-screen apps (with a couple of significant variations). I did, however, use the test harness apps for my BlueThoth library[0], as the starting points for the various Blue Van Clef apps[1] (Bluetooth browser apps for all Apple platforms -yes, you can use your Apple Watch or AppleTV to sniff BLE).

I also have a lot of published modules[2] that help mitigate a lot of the “niggles,” involved in app development. That probably helps a ton.

To be fair, if I am actually shipping an app, as opposed to just using it as a test harness, I spend a lot more time on it, than a couple of hours. I've been working on the app I'm currently developing, for over a year and a half.

[0] https://riftvalleysoftware.com/work/open-source-projects/#bl...

[1] https://apps.apple.com/us/developer/rift-valley-software-inc...

[2] https://riftvalleysoftware.com/work/open-source-projects/


Regarding chocolate quality discussions: when Hershey started up, they couldn't get any European chocolatiers to share their process, so they had to invent one. They used lipolysis, which creates butyric acid, the chemical that gives vomit its distinctive smell and aftertaste.

You can definitely taste it if you pay attention, like "yep, that's it, it's that vomit aftertaste."

https://www.google.com/search?q=hershey+vomit


Interesting article that suggests the context in which you smell a scent almost entirely determines your response to it:

Moreover, motivational responses were entirely different as a function of label. For example, when isovaleric + butyric acid was called “parmesan cheese” it inspired participants to say they would like to eat it, while when it was given the negative label (“vomit”) it provoked the wish to escape from it. The effect was so strong for certain odors, that participants could not believe that the same odorant had been presented to them at both sessions

https://web.archive.org/web/20090203043112/http://www.senseo...


Yeah I've noticed this too. Everyone says Hershey's tastes like vomit but you never see anyone calling Parmesan cheese "vomit cheese".


“Everything” that is easy used to be hard, depending on your time scale.

I’m certain that COBOL programmers were raving about the ease with which you could program the computer back in the day.

The items on this list sort of happen within a career, so an individual had to experience how hard it was before. It probably reads differently to someone coding for 5 years vs 30 years.


There is probably a "Law" to the effect of "The complexity of the task increases to the limits of the current tooling."


Parkinson's law: "Work expands to fill available time".


I would also agree with this on MANY fronts, not just software. As a kid of ~6 years old I dreamt of building a robot. Just this past weekend, I helped my godson build one (a solar kit for kids) and it was on sale for $10.

Things are generally more affordable now than they used to be for the masses, and easier to achieve and prototype, with less turnaround time, 1-day shipping, or the mall-ification of society.

Cool stuff really.


The flip side is that because we treat cheap things as disposable, we are accelerating the collapse of the biosphere.


I've only been using desktop Linux for 15 years, but it is vastly easier to use it as a daily driver than it used to be. It still has its problems--just yesterday I sunk an hour into an elusive bug. But when I was doing that, I remembered that such things used to be nearly constant and aren't any longer.


As an i3 user, basically going without a DE, I recently tried to breathe new life into an old Core 2 Duo laptop that was in very good shape and basically just used to browse the web and play spider solitaire. It was on Windows 7, and I don't think upgrading to 10 would have been worth it with that CPU and 2 GB of RAM.

I picked Debian, and put the different DEs through their paces. I started with those that should have been more lightweight, but ultimately found that only KDE of all things gave an acceptable user experience while not feeling overly slow.

I don't remember which was which; I tried MATE, Cinnamon, LXQt and maybe something else I'm forgetting now. One of them was really slow to react to the multimedia keys, and then would appear to buffer the keystrokes from the key repeat like it's the MS-DOS days, i.e. you press brightness up, nothing happens, so you hold the key, and when it finally starts adjusting the brightness you immediately let go of the key but it still goes on and on to maximum brightness.

Then another one couldn't be convinced to behave in a consistent way when it comes to screen brightness wrt (un)plugging mains power and using the brightness keys. It was something like: unplug, and the screen goes brighter, because I had manually adjusted it way down before on mains power (sitting in a dark room) and the default setting for battery happened to be higher. Annoying but ok, but then if you touch the brightness controls again it first goes up again and then very low and whatnot. There might be some very clever intentions behind this that I didn't catch right there, but I doubt the non-tech-savvy person this was intended for would get what kind of 4D chess magic was going on there. It just felt like it was spazzing around.

Then the third one installed with all its icons missing and the font rendered way too large. That's just the three most blatant issues I ran into. Installed KDE last and everything was sane and usable, to my surprise.

Granted, some of the issues, especially the icon/font one, are probably just packaging mistakes on Debian's side; I don't know how often people really install multiple DEs in parallel, so this might be untested territory. But still, it showed me once again that as much as I like Linux, I'm never gonna claim it's the year of the Linux desktop for probably another decade, and I would only encourage curious, tinkery friends to actually try Linux as a daily driver if they do more than just browse the web, and then probably never use anything besides KDE or Gnome 3, at least for a start.


KDE is amazing, as in I am constantly amazed by how good it is.

Don't forget to donate to free software projects that make your life better!


I agree. I've used Mint since Daryna and since version 19 it has not worked on my older, but still quite good laptops. A similar issue with having to choose between dual monitors or having brightness buttons working. It's a shame, since it has always "just worked" for me, unless I was doing something complicated. And this is for a very beginner friendly distro.


You should try xfce.


I think that was among them now that you mention it.


I find it harder to understand internally, though. Maybe I just have not kept up, but it seems that since systemd was added, the additional complexity is significant.


IE support for JS/CSS has been massively simplified by not supporting IE any more.


It’s honestly freeing. I was so happy when the last holdout among my clients told me they no longer need to support any version of IE. Removing all of the IE polyfills and babel transformations from package.json felt so good.


I browsed the Washington Post and NYTimes with IE11 the other day and was pleased to see they looked like absolute garbage. :-)


It's very hard to accept the statement that writing fast programs is now simpler with Go/Rust. I think that what happened is more like this:

* With Go it is now possible to write reasonably fast (not as fast as C) programs in a simpler way. That's great indeed, but that is not exactly the above statement.

* With Rust it is possible to write programs that have speed comparable to that of C programs and that are memory safe. So Rust made memory safety simpler, but writing a fast program in C is often simpler compared to writing a fast program in Rust, if we ignore memory safety. Overall Rust made programming harder: something that I'll hardly excuse to it.

EDIT: The blog post was modified, so it only lists "Go" in the section about fast programs. My comment refers to the version of the blog post I read.


> writing a fast program in C is often simpler compared to writing a fast program in Rust

I think it really depends on what you need to do. If you need a HashMap or a BTree, getting your hands on high-performance implementations of those structures in Rust is much easier than in C. Also "sprinkling some threads" on existing serial code can be much easier in Rust, using a library like Rayon. But I take your point that designing new data structures and doing unsafe things with pointers can be more complicated.

> Overall Rust made programming harder

This is certainly a matter of opinion, but I like to argue that writing large, correct systems software was always this hard. Rust frontloads a lot of the difficulty, forcing you to handle every error condition, lifetime mismatch, and potential data race before it will even consent to run your tests. When you're first learning Rust, or when you're writing toy code where safety doesn't matter to you, this can feel like an unnecessary burden. But I think we've learned from experience that writing large systems software without this discipline leads to an endless stream of memory corruption and concurrency bugs.


I would agree. I feel like we keep seeing these “software has become more complicated!” blogs. But it’s really just that modern languages/stacks don’t focus on making software development easier for the individual, they focus on making it easier for the organization.

Related anecdote: I have been using GraphQL at work and holy shit it is boilerplate galore but I swear we hardly have to train new hires on how to structure their software to look like ours.


On top of that, our security standards have also gotten higher. No one was worried about corrupt spreadsheets causing segfaults in 1995. But today that corrupt spreadsheet is just as likely to be a malicious file from some attacker on the internet, deliberately crafted to exploit the buffer overrun behind that segfault. Any piece of software that might conceivably touch attacker-controlled input needs to be hardened against this sort of thing.

We're also going to be seeing more and more pressure to make things multithreaded. So so much of the old software was designed back when concurrency just wasn't in the picture. That's why PID reuse races are so nasty in child process management. And why anything that touches the environment (which lots of standard libc functions do under the covers: https://rustsec.org/advisories/RUSTSEC-2020-0159) is unsound in multithreaded programs. Making everything threadsafe is genuinely hard.


I suspect it's pretty hard to make the case that the average Rust program is harder to write than the equivalent C program. I suspect it is quite the opposite. The language provides so many tools that didn't exist in the age of C -- high level concepts like Iterators, destructuring match statements, deriving debug printing for free, etc.

But both of our opinions are just opinions.


Here is the initial version before it was modified.

https://web.archive.org/web/20220220150536if_/https://jvns.c...

"Building fast programs, with Go/Rust"


> Overall Rust made programming harder: something that I'll hardly excuse to it.

Rust isn't much harder than any other language once you understand lifetimes. Non-lexical lifetimes made it easier than ever. It's a month of dealing with speed bumps as you learn, and then you're good to go.

Rust makes it insanely easy to write multithreaded applications. I write servers on a daily basis with shared in-memory caches, worker thread pools, async jobs, and all kinds of fun and useful things that I wouldn't dare do in any other language.

I'm developing apps in Rust at near Rails speed.


Would you care to share your library stack for that?

I've sometimes dreamt of playing around with building a Rust-based web site/app, but then I spent that afternoon reading about different web frameworks, db access libraries, etc. and never really got started.

Given you seem to have a beaten path here I'd love to know what it looks like :)


I think your point is correct around Rust, in that writing Rust is harder than writing C due to the constraints the compiler places on what you can do. So far I'm happy with this tradeoff because I've had some very gnarly C concurrency bugs that took weeks to understand and debug. If the rustc compiler can point out that I'm doing something stupid and save me two weeks for a bit of pain immediately, I'm all about it.


Reading the article, I didn't see writing fast Rust programs listed as easier (only Go). The callout was for sharing data between threads, which I think is an area where Rust has the edge over other languages.


Because the article was modified.

[Note] Another HN user mentioned that the statement was dubious, cited the original statement, got downvoted to hell: https://news.ycombinator.com/item?id=30406644


They got downvoted not because of their disagreement, but because their comment was low quality. "Sure, bud." is a mere quip and doesn't add anything substantial to the discussion.


Good point.


Why do you feel that way?


> Overall Rust made programming harder: something that I'll hardly excuse to it.

Shouldn't this also be:

Overall Rust made programming harder if we ignore memory safety: something that I'll hardly excuse to it.


Would be interested in also seeing a "things that used to be easy but are now hard list". Feels like basic stuff is harder now:

* Privacy online

* Google search for niche or exact matches

* Browsing without captchas and blockers, especially if you don't adhere to big tech's rules for the internet

* Sending emails without worrying about spam filters being way too aggressive

* SEO before it got hacked to hell

* Barrier to entry for creatives to build things on the web (for all the flaws flash had, it bridged the gap)

* Finding real discussion and information. The growth of bots posting, misinformation, blogspam, and toxic echo chamber partisan discussion online (maybe it was always like this, i.e, eternal September?)

* MacBooks being easier to upgrade before when things weren't soldered on

* In general anti right to repair hardware

* Not having to worry about dongles, i.e. connecting USB devices to your machine

* Ransomware wasn't a thing. Nothing like a non-technical user needing to learn BTC or XMR to maybe get their data back

* Being able to buy software once vs a subscription, e.g., photoshop

* Work was easier to disambiguate from life before chat apps like Slack made employees always reachable

* CGNAT not being as common

* OAuth is still annoying to implement

* Blocking ads was easier

* Getting a job was easier for a lot of people prior to whiteboarding, leetcode tests, assignments, multi-day interviews and trials, etc. becoming more common

* GDPR making infra+dev work harder because big tech companies don't understand consent and couldn't behave, also leading to performative/annoying cookie banners we see everywhere

* Needing an internet connection for software+tools that used to be able to run locally on your OS

* Website responsiveness was easier before all the devices and monitors with different viewports

* Archival (or scraping) was easier before the majority of human generated content was placed behind login walls (fb) and rendered content needing headless browsers

Edit: Call me cynical if you want—yeah some of those programmer things listed are easier now, but improvements on the core regressions I listed would help programmers (and non-programmers!) all across the world and in developing countries more than some niche highly paid SV engineers using terraform to orchestrate infra on GCP.


I agree with all of those. More things that are harder now:

* Finding honest product impressions/reviews.

* When looking for a configuration setting or tip to use any piece of software, find a one minute article with clear bullet points instead of an 11 minute YouTube video with a personal backstory and a VPN sponsorship.

* Finding a cooking recipe that isn’t padded with SEO nonsense.

* When finding an online discussion or blog article in a search result, being able to read it straight in the (mobile) browser without hostile prompts to make an account and/or download an app.

* (Europeans only.) Publish a simple website without worrying that the hosting provider you choose doesn’t perfectly conform to privacy laws that even lawyers won’t explain to you and you’re personally liable for ridiculous damages.

* Run software developed by hobby programmers on machines you own.

* Connect devices like speakers, keyboards and mice to computers. (Edit: You mention that.)

And, completely disagreeing with the article on this one point, this is also harder:

* Writing cross-platform UIs. As someone with a lot of hate for Java from back in the day, this is painful to write, but: Java Swing might not have produced the nicest GUIs, but the stack was so much simpler than Electron+Node+Express+Webpack+React+TypeScript (substitute Svelte, Vite or whatever you want), it’s not even funny.


"cross platform" isn't well defined these days, since it may or maynot include mobile and/or browser platforms. That said, Qt is much better to use than Swing ever was, and really no more complex until you want to do things you could not do in Swing anyway. Gtkmm too, from my personal (21+ years) experience.


> * (Europeans only.) Publish a simple website without worrying that the hosting provider you choose doesn’t perfectly conform to privacy laws that even lawyers won’t explain to you and you’re personally liable for ridiculous damages.

I'm sorry but that's just FUD. GDPR fines are only for serious infractions with intent, most violators get off with a warning. Furthermore, access logs are OK because they're a legitimate purpose, as long as there's rotation.


Oh boy, you have no idea how horrible it is. Check out the latest rulings in that regard. Google Fonts is a big no now. Everything that "transmits PII (including your IP address)" is problematic. It's just a matter of time until all those free static site hosters (github/gitlab pages, netlify, etc) are targeted, cloudflare probably too. Our legal counsel already told us to "at least add some notification that people are leaving the site" if they click on the social login buttons ... because it could be considered transmission of the IP because of the redirect.


Ain't it Google specific?


Yes, I'm specifically in fear, uncertainty and doubt. To be clear, I'm fully on board with the intent of the GDPR, but I have no idea how to follow it. And "you'll be okay, the fines are probably small" is exactly the kind of statement that causes anxiety.

I’ve recently tried to figure out if it’s okay to use something like Cloudflare Pages to host a blog. That’s not an esoteric question, but I was unable to get an answer. They’re not based in Europe, the log retention policy is a bit unclear, regulators seem to be disagreeing on whether even their CDN is okay (and Pages is built on top of the CDN). And in Germany, the real danger isn’t a regulator fining you, it’s a private lawyer sending you an “Abmahnung”. Which they will do for any trivial infraction, often using automated platforms.


Are you located in Germany? If so - welcome to the boat.

From experience I personally would not host on CF pages. But, depending on the content and intended audience CF pages could clearly be argued under legitimate interest. You would/should probably have a TIA in place and state that in the privacy part of the site.

I know people arguing legitimate interest when using Webflow. Also clearly PD being transmitted to a non-EU country there.

As I said, personally I would go with something like uberspace.de, as I just like their offering and the fact that I always have a human replying and being helpful when something breaks because I fat-fingered it.


Yeah, Cloudflare Pages is out, same as Github/Gitlab Pages, Netlify and all the other free offerings Hosted by a US Company.

Social Login Buttons are probably a problem too, if they automatically redirect. Could be considered a transmission of IP.

Game Servers that connect to Steam on client connect are probably a problem too.

I like the idea behind GDPR but the implementation is a fucking shitshow.


Agreed. To add to that I would say that publishing a website (if you yourself don't do tracking, advertising, affiliate and all that) is in no way impacted. A simple web site without cookies and not transferring user data elsewhere but the hosting webserver is not impacted at all.

At least when hosting with an European host.

One needs an imprint and a data privacy page. But both are easy (except in specific cases, like for MDs, who need to add just a little bit of information in the imprint).

I support small businesses with their web activities. It did not get overly complicated through GDPR.


> One needs an imprint and data privacy page.

Really? I have a small personal website with no tracking; I guess my hosting provider collects server logs, but do I really need a privacy page for that?

An imprint is something I've only seen on German websites (I recall reading somewhere that those are required by law there).


In Germany every site that collects, stores and/or processes personal data (PD) is required to have a data privacy explanation that is easily reachable from every page and explains to the user what data is being received, stored, processed and also explain the user's rights (for example the right to be informed, to correct data and so on).

At first glance, a "private" page does not directly process personal data and would therefore not require a privacy policy.

So if you don't use contact forms, advertising banners, social media plugins, etc., you should be on the safe side.

What most don't see: The server on which pages are located (hosted) collects personal data in the background in the form of server log files. These log files contain IP addresses, these addresses are personal data.

So yes. Even if the server doesn't log the IP to logfiles, it still receives the IP, so every site needs a privacy page.

There are good privacy page generators, though. Free of cost.


What if I run something like a good old phpBB just as a hobby? What would I put on that privacy page? You enter your email address on sign-up. It's visible in your profile. You get notifications to it if you subscribe to topics. Isn't it kinda obvious it is stored? What if phpBB logs IP addresses for every post? I know it did in the past, and I don't remember there being an option to have that purged after a given amount of time. Can I do that? Do I need to put an imprint on the page, or is a contact form enough? Maybe I don't wanna put my name and address openly on the web. So I guess I'm just way too afraid to host a forum in Germany for my local sports club, because I don't wanna get hassled by lawyers who made this their business model. Better to open a Facebook group, this is certainly much more privacy friendly.


> In Germany

Well, I'm not in Germany... you didn't mention in your original comment that it only applies to German websites!


Germany is the only country I can speak of from experience. Not sure about the other EU countries, but would expect similar regulations.

Everybody else. Officially if you target European audiences you would need to adhere to the same regulations, but practically nobody could reasonably enforce it.


Well, as I mentioned, I've only ever seen an imprint on websites from Germany, so I think you're conflating EU and national regulations.


Recipes: finding a recipe that doesn't start with 1000 words on why this dish makes the author cry because it reminds them of their dearly departed Nana, followed by anecdotes about Nana.


Logging into an online account on a computer you don't normally use when you don't have your phone. This can range from "had to jump through a bunch of extra hoops" to impossible if you haven't saved backup codes ahead of time (Google).

My wife lost her phone when we were on vacation and it was literally impossible for us to log in to her Google account in order to use find my Android.

We were trying to log in on a relative's PC to use find my phone, but it was a catch-22 because we needed a phone to log in to Google on a relative's PC.

I pity the fool who loses their phone in the age of 2FA.


I try to be disciplined about having critical information printed out, especially when traveling internationally. I expect more so than most people. And I do still carry a separate (small) wallet. But we're pretty much all increasingly dependent on our phones for more and more things.

ADDED: And thank you for mentioning that. I just realized I never transferred my Google authenticator setting to the new phone I got last year.


> I pity the fool who loses their phone in the age of 2FA.

This keeps me awake at night. Especially as it is no longer possible to contact humans at a lot of sites to prove who you are. I went to jail for eight years; when I got out, I could only access two of the thousands of accounts I had made over the previous 20 years.

> My wife lost her phone when we were on vacation and it was literally impossible for us to login into her Google account in order to use find my Android.

This is a really frustrating situation.


My phone screen suffered a bad accident, to the level of unusable. 4 weeks for a repair.

I needed 2FA to access company services. Email, chat, VPN etc. Which of course, was on my phone.

I emailed IT Support from my personal email to request a temp token. My ticket was closed, with a message saying "Email tickets are no longer accepted, use the service portal."

I needed 2FA to access the service portal.

Luckily I had a co-worker on a non-company chat service, so they opened a ticket on my behalf...


I agree. My SO is nowadays so frustrated by all the security hoops and pseudo-security hoops, like Gmail regularly stopping working with her Thunderbird (she regularly needs to set up a new app-specific PW). Other apps/sites send SMS, others force you to use their app for verification, and there's not one standard to get used to.

I am used to this from work but even there MS pesters me to install MS authenticator while I clearly always just use my regular 2fa solution.

And because the state of 2FA is not user friendly, my own solution is not very secure: I use Authy with a client on every device. So there is no real second factor. I just copy-paste from the app on my computer and don't need to grab my phone.


I can't really see this as the negative you imply it to be. This seems like complaining that you might get locked out of your house if you lose your keys. There are certainly better and worse forms of 2FA, but I would like more pervasive (well-implemented) account security, not less.


The main key is the password, and we didn't lose the password. What we lost is the secondary, rarely-used key that you only need when logging in on a new device or at a new location. So if you don't leave home much, it's easy to forget that your online account access is totally screwed without your phone anywhere else. My wife's Google account doesn't require TOTP to log in, so she never needs her phone when logging in at home, yet the phone is mandatory when logging in for the first time on a new device or at a new place. This creates catch-22 situations for people unaware of this behavior, like it did for us ("we need to login to find the phone but we need the phone to login").

We learned our lesson, but I'm sure there are tons of other people that haven't learned this the hard way yet (but will eventually). I could see this becoming an easy way to sabotage/get revenge on people - steal their phone while they are on a business trip/vacation and then they are locked out of all their online accounts with no way in unless they had the foresight to keep physical copies of backup codes in their wallet/purse.


A better analogy would be the locksmith refusing to let you in because you can’t produce the deed, which itself is in the house.


That's why Google often prompts you for backup information - backup phone number, email address, which can be used to recover an account if you've lost your phone.


There are ways to provide backup factors without tying them to more personal information, however.


There's always a tradeoff though. With more secure backup factors, the more likely it is that you could end up locked out with no recourse.


That's why you always copy the secret recovery codes for the 2FA (which may not be helpful if you left your notebook in your home country during vacations).


> * Privacy online

Privacy is a lost cause, the younger generations don't care.

I remember when people were warned about sharing their real information online. over 50% of people I knew were just nicknames on the internet. I had zero clue about their gender, where they lived or what they did for a living. Didn't affect our conversations the least bit.

Now everyone is plastering their faces everywhere they go. Every picture they take has their face next to it. Usually along with their name, the location and the exact time they were in there and who they were with.


>over 50% of people I knew were just nicknames on the internet.

Depends on the circles.

On the other hand, I accessed Usenet from a company computer which had my real name and company email. And on the local forum on my BBS, most people used their real names and we even got together in person from time to time.


Really? On almost all Discord servers I've visited, everyone has a nickname and doesn't share photos of themselves.

On Instagram, I've also noticed a lot of young people who share e.g. art have a completely anonymous profile (and I guess probably another, non-anonymous profile for their friends only).

Not sure about Snapchat or TikTok, though.


The problem with Discord and anonymity is the fact that it's really tedious to have multiple Discord accounts.

Let's say you want to keep your work, hobby and private profiles completely separate. You'd need to have one with the official client and the other two with a separate browser each. Also you'd need to be super careful when joining a new server for any category to make doubleplussure you're not joining with the wrong profile.


IME Discord anonymity is more like ‘your friends know it’s you, but strangers can’t find out your real identity’. But I agree, they could do with a profile switcher.


I’d say the non-technically inclined groups of people do not care. Privacy isn't dead; rather, there will always be a corner of the population that does not care. That is okay; let them be the fuel for the corpo machine.


> Not having to worry about dongles, i.e. connecting USB devices to your machine

The dongles. OMG. It's worse than the parallel printer age. I cannot recall another period in history where I needed a dongle so often. USB-C only on Macbooks was such a huge design mistake. Especially in the era of USB-A! Soooooo many things are still USB-A. All for a little more thinness! It's really grating.


USB-C and the removal of headphone jacks are examples of how not everything in American society is "market driven"; some things can easily be imposed on the whole market and consumers by a single big player.


How is that? There are still plenty of devices with USB A/Headphone jack available. Apple devices are far from the "whole market".


Installing Windows without a Microsoft account


That's false. You can still install Windows without a Microsoft account. Granted, it's harder now to see "Use local account", but it's still there.


Your comment highlights exactly how the GP fits the “things that used to be easy but are now hard” list.


That's false because it's true? I don't even...


How about win 11?



I suggest that "barrier to entry" also includes getting started programming in general. Too much to choose from, large leap in skill level required from text in a terminal to graphics on the screen, etc.


I question if that's really true given all the libraries for languages like Python, online tutorials/projects, and various types of low code environments. Sure, you need to figure out what your options are (or have someone show you) but I question whether it's really more difficult to do something than firing up the BASIC interpreter and typing in code from a magazine.


The worst part is when people somehow assume all these downsides must necessarily follow from the things that got easier, rather than suggesting we could have the good parts without the bad ones.


The “old web” — the one before the rise of the trillion dollar engagement optimizing platforms — is probably completely foreign and unimaginable to those who haven’t experienced it.


I do wonder what they'll think of it when (if) they get to experience it.


Well, and before "the masses" really had access to it.


* Making sure you're not being spied on

Not sure if this was ever easier, but to me it seems it got harder. With all the possible attack vectors, ranging from supply chain attacks hard-coded deep down in the CPUs up to malicious dependencies being swapped out every day on npm, how does anyone trust their machine?


The Flash authoring tools still exist, they’ve just been renamed Animate, and they export to HTML5.


I would argue getting a job is easier now. Prior to whiteboarding and take-home assignments, if you didn’t go to a top-tier target school and have an excellent 3.5+ GPA, it was really difficult to even get your foot in the door for an interview.


When was this? Maybe If you’re talking exclusively about FAANG.


I’m too young to know from my own experience but from what I’ve heard early 2000s.


I was around in the early 2000s, and you certainly didn’t need a 3.5 GPA from a top tier school for anything but a very small handful of companies.

In the early 2000s, you could get decent work just knowing HTML.


CGNAT will be here for the foreseeable future, and I wouldn't be surprised if ISPs end up deploying some version of CGNAT on IPv6, assuming they're somehow forced to use IPv6 in the future.


Why would they want to run CGNAT on ipv6?


I can think of a couple of reasons for it

- they get money for selling static/public IPv4 addresses right now and they wouldn't want this money going away when IPv6 arrives

- can't imagine what the world will look like with shitty insecure routers everywhere and everyone having a public IPv6 address; ISPs might provide NAT connections to protect non-tech people

If I'm not mistaken, one of the largest ISPs in my country has begun to roll out IPv6 but it was either broken the last time I checked or it had some form of NAT on it.


* email delivery


Connecting your laptop to internet via an Ethernet cable


Makes me wonder how many sites are still useable with Lynx


Our online discourse is already saturated with cynicism and pessimism. Do we need to bring it to this thread as well?

Then again, given the contrarian bias of HN comments, I guess it's natural.


> Getting a job

Though negotiating is way easier with things like levels.fyi and blind.


The points made upthread would seem to apply to a very narrow slice of jobs. In general, finding info about companies, reaching out to people you have connections with, etc. seems easier than newspaper ads, form letters to HR departments, etc. Though then and now, personal networks are probably still the most effective path for many.


Until "large prime factorization" makes this list, I think we're cool.


Personally, I disagree that parsing was hard before and made easy with PEG parsing.

For decades it has been easy and common to implement parsers by hand.

From what I can tell, universities and textbooks just overcomplicated the process by teaching parser generators.

I have done a few studies of various open source ecosystems now and (edited: other than CPython) I haven't seen PEG parsers really used in anything but toy implementations of things.


It’s been a minute but parsing with decent performance is what caused all the parser generator stuff. It is a historically hard problem.

PEGs allow the generator input to more closely match the mental model programmers have; *LR parsers require the language to be structured just a little differently from how most people think of it.


Well if most production parsers use hand-written parsers (and aside from SQL databases, they mostly do) I don't think performance would really make sense to be the reason anyone chooses parser generators.

I'd be curious to read any posts/papers you've got that have benchmarks that show performance to be a reason.


CPython appears to use a PEG parser for its implementation: https://github.com/python/cpython/blob/b77158b4da449ec5b8f68...


That's true, but it's still quite rare to see.


Pretty sure that’s relatively new and had a hand written parser for most of their history.

…which also implies that it used to be hard and now isn’t.


No they moved from pgen, a custom parser generator, to this PEG parser.

The parser was never handwritten.


IMHO the barrier of entry really came down because with PEG-styled parser libs they put you back in the comfortable "declarative land" where you specify what you want using roughly the same mental model as RegEx without having to worry too much about quirks caused by implementation (i.e. leaky abstraction).
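
For a concrete taste of that declarative style, here is a small sketch using parsimonious, one of the Python PEG libraries (the toy arithmetic grammar and input are purely illustrative):

  from parsimonious.grammar import Grammar
  # A toy arithmetic grammar in parsimonious's PEG notation: ordered choice ("/"),
  # grouping, repetition, and regex terminals ("~") read much like a RegEx.
  grammar = Grammar(r"""
      expr   = term (("+" / "-") term)*
      term   = factor (("*" / "/") factor)*
      factor = number / ("(" expr ")")
      number = ~"[0-9]+"
  """)
  tree = grammar.parse("1+2*(3-4)")   # returns a parse tree (a Node)
  print(tree)                         # walk it with a NodeVisitor in real code

You write down what a valid input looks like and let the library worry about backtracking, much as you would with a regex, just with recursion available.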


> IMHO the barrier of entry really came down because with PEG-styled parser libs they put you back in the comfortable "declarative land"

The whole idea of PEGs is that they’re specifically not declarative - they’re an imperative way to express a parser.


I recently used lark and liked it. To my surprise.


Rounded corners and gradients on the web. When I started, both required background images; now they are just a CSS property.


That's right! I almost forgot about needing background images.


Fun list. The only nitpick I have is for Spark. I would not consider it easy at all. From my experience, team members that need to come up to speed on Spark usually take about a month to do so.


Eh... I'd argue against quite a lot of those being "easier" than doing things manually.

Docker is a prime example. It's supposedly easier to spin up a docker container for a basic web server stack, but to be honest I'd rather install PHP/MySQL and an SSL certificate manually, rather than spend time finding a decent container from a trustworthy source, and then spend 20 minutes trying to figure out how and why it's been configured the way it has, etc. etc.

Plus there's the need to actually install docker itself and learn how to use it in the first place.

Every one of these comes with its own learning curve which isn't always better than just learning how the actual tools underneath work.


I highly disagree here. Within seconds I can have all the technology I want running in a Docker container with little to no setup. Just make a docker-compose.yml file with the services I want, say Postgres and Redis, mount my local folder to a Golang Docker image, and then I just do `docker compose up` and all three are running and communicating with each other with little to no effort.
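
For illustration only, a compose file along the lines described might look roughly like this (service names, image tags, and the command are placeholders, not a recommendation):

  # docker-compose.yml -- a minimal sketch of the setup described above
  services:
    db:
      image: postgres:15
      environment:
        POSTGRES_PASSWORD: example
    cache:
      image: redis:7
    app:
      image: golang:1.21
      working_dir: /src
      volumes:
        - ./:/src        # mount the local folder into the Go image
      command: go run .
      depends_on:
        - db
        - cache

`docker compose up` then starts all three on a shared default network, where `db` and `cache` are reachable by their service names.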


That's how you get people who have no idea what the underlying tools can do and who bloat the high-level abstraction.

Complexity has to go somewhere. If something is easier, something else is harder. These things may make the bog-standard use case easy, but woe to you if you ever try to go down another path or change something. Then you need to figure out whether the abstraction layer allows you to tweak the underlying software layer, or invent your own way of reaching into that layer, which may break when any update comes in a higher layer, etc.

It's more learning upfront to get familiar with the basic Linux tools and services, but those tools are very versatile and flexible and will stay mostly the same for decades.


I understand you manually manage the kernel and wrote your own TCP library? I sure hope you didn't just pass all of that complexity down!


There are well designed abstractions that are helpful. It's very hard to come up with them.

My critique is aimed at those who quickly jump to introduce new abstractions into their workflow before they understand what they are doing. It very often results in pain. It helps in the short term, because you are up and running faster, but these shiny new frameworks are often immature and, like Potemkin villages, only look good from the correct angle and have no meat. Once you try to use them for custom scenarios, you end up having to route around the abstraction layer because it wasn't designed so well.

To take an example outside of cloud management etc., in deep learning frameworks: there were competing libraries some years ago: Caffe, Theano, TensorFlow, etc. François Chollet decided to create an abstraction layer, called Keras, that could unify multiple low-level frameworks. Did it work? No. Because the different implementations were different enough that they couldn't usefully be coerced under the same abstraction layer. I mean, it worked for some time, to some extent. But very soon after, they dropped support for anything but TensorFlow. Creating good abstractions is very hard, especially if the underlying layer is unstable and gets big conceptual changes. With kernels that's not the case. We've had them in similar form for like half a century.


Interesting, thanks!


I don't think these points address the original post's points. Specifically they highlighted the issues of _finding_ images to run and having the _knowledge_ on how to use these tools in the first place for abstraction.


So once you've done your own trusted setup, who is stopping you from creating a Docker container to your liking and using that from then on?

I don't trust anything, hence all my Docker containers are created personally, and I use them moving forward. At first it will seem like a never-ending job to create yet another container, but eventually you'll have all your needs fulfilled. Nowadays a month can go by without my needing to create another one, since I've already done this A+B+C combination in the past.


I don't have any issues with containers, but it seems like in many newer projects from the last five years or so, the developers are developing nearly exclusively for containers.

My current impression is this violates the spirit of POSIX, because figuring out configurations without containers can become a non-trivial effort, especially for those of us who have not yet learned how to configure containers.


In .NET, mandatory async/await for new APIs is also making things that were easy harder. Async introduces multithreading where there wasn’t any, breaks call stacks (making debugging harder), deadlocks under certain conditions but not others (ASP or WinForms vs. console), forces you to handle what happens if the user interacts with the UI before your function is done executing, etc.


This is definitely true. Just trying to download a file can become an async chain hell-hole.. it's asyncs all the way down!


The pain in the chain is mainly in the main().


> forces you to handle what happens if the user interacts with the UI before your function is done executing

Thank goodness for that. No more UI suddenly freezing completely and refusing input just because the internet connection is flaky. Async or other slow operations should not be running on the UI thread.


Indeed, concurrency being 'easy', nice one. Concurrency is only 'easy' if there is no shared mutable state.


The first commercial game I ever designed was built on a custom-made 3D renderer. All sorts of things you do in hardware now had to be done in code, and you could seriously push like 1k triangles at 20 fps. And that was really good! It took the smartest programmers I have ever known to get that to work on a 486.

Now you have Unity, Unreal and hardware acceleration.


Developing for mobile is so much easier overall with the Ionic framework.

It's not for every application, but I would say it's great for most. At my first startup we needed iOS, Android, and web teams. For my current startup, we have one frontend team using Ionic/Angular, and we just deploy to all 3 places with the same codebase.


NLP with HuggingFace


I'd never heard of this, thank you! I was hoping for people to provide more examples of things that have gotten easier in this thread but I'm happy to see one or two at least :)


> Cross-compilation (Go and Rust ship with cross-compilation support out of the box)

That's not right? Go has built-in cross-compilation. Rust does not; you need to bring your own cross toolchain even after adding rust-std for the target with rustup, unless there are recent changes I'm not aware of.


It works for pure Rust code but once you bring in C, you need another tool chain too. Same as Go, as far as I know.

Zig has both beat right now, as you do not need anything else in either case.


Even if you don't bring in C, you need a cross linker and some libraries to link against. Rust doesn't come with those, except maybe for the mingw target.


I don't think of downloading the libraries as being a "cross toolchain", but that is true.

Common cross-compiled components default to lld, which is a cross linker.

That said, yes, there are some *s here depending on what exactly you're doing.


AFAICT, rust doesn't ship lld, though.


If you are using the various ARM targets or wasm, it'll also include the llvm-tools component of rustup, which does.


Oh, my bad, now that I think about it, all my Rust projects have -sys dependencies so I somehow assumed a cross toolchain is always required.


Yeah. C deps are culturally tolerated by Rustaceans more than Gophers, so it can feel like this even if it isn’t literally the case.


While I agree that many of these items are way easier than before, it still comes at a cost. Making a lot of these things easy has actually caused new problems that are once again, hard, mostly because of emergent complexity. For instance - configuring cloud infrastructure declaratively has made it very easy to stand up vast numbers of computational instances and distributed systems that now have to be reasoned about, monitored and managed effectively. CI/CD pipelines means we are able to deploy code way more quickly and efficiently and with less manual testing, likewise causing an explosion in complex business logic that's been deployed. If you're not careful, the complexity enabled by these developments can quickly overwhelm you and your team.


I find the serverless frontend to be way more tedious than a monolith approach.


Wow! Let me continue :)

- Text-to-speech and vice versa, using cloud APIs

- Creating solid objects from 3D models (3D printing)

- Messaging: pub/sub, message buses (Kafka, Redis, PostgreSQL channels, RabbitMQ, etc.)

- Advanced data structures are more available (sets/hashes/etc. now in the standard libraries of any language, and in many databases)

Things that are surprisingly still a pain:

- Scanning documents while preserving formatting (there are niche solutions but they are too expensive for a person who needs it a couple of times a year)


Elaborate a little bit on what you mean by "preserving formatting"?


When you scan a document with tables and indentation and get exactly the same document back in, say, docx format.

I know there are solutions from ABBYY, but to me they are too complex and probably expensive (I checked the price several years ago).


One thing I would like to add: learning new things and gaining mastery has become approachable too. Note that it is approachable, though gaining mastery is an intensely personal endeavor.

Learning new things via YouTube, Coursera, Udemy, Udacity, edX, and many other things. Mastery via project-based learning, GitHub, tests in courses, certification exams, Exercism, etc.


Perhaps a good follow-on thread would be new things that have solved multiple other things that used to be hard or impossible.

For example, Rust comes to mind as something that has improved error messages, elimination of most memory-related bugs and "fearless concurrency." All while being fast and having support for cross compilation.


The general theme of this list is that technology gives us more and more hoops to jump through, then provides some trampoline to jump through them. We're supposed to be eternally grateful. I'm not. I want to focus on problem-solving, not masterfully using trampolines.

> SSL certificates, with Let’s Encrypt

Semi-mandatory SSL certificates weren't a thing on the web until fairly recently. Not managing them at all was certainly easier than managing them with Let's Encrypt, which has a lot of gotchas, requires tooling and knowledge, and can suddenly break your website if something is off.

> Concurrency, with async/await (in several languages)

async/await usage is literally the main reason why my recent project is written in Go instead of C#. I like C#. I don't like Go. But dealing with all the gotchas of .NET Core async/await APIs drove me away.

Moreover, Erlang had a much more sane and powerful parallelism model way before async/await.

> Centering in CSS, with flexbox/grid

Again, like with SSL, this is technically correct, but generally incorrect. Centering in CSS has gotten much easier with flexbox/grid. However, centering in HTML only became hard because tables were abandoned as the layout mechanism. Sure, they're not "semantic". But we're talking about what became easier, right? Not about what became more ideologically correct, semantic, etc.

> Configuring cloud infrastructure, with Terraform

Again, there used to be no cloud infrastructure to configure. Personally, I find Terraform annoying more than anything else.

> Setting up a dev environment, with Docker

As opposed to doing what? Keeping my dev environment simple is a constant battle, and most of the time Docker is the weapon used by the other side. In the '00s my dev environment was a text editor and some upload tool.

I haven't used PHP in well over a decade, but for my latest personal project I've decided to brush up on PHP 8.1. Despite awful syntax, I can do stuff by writing a handful of lines of code in a text editor. It's very refreshing. 100% focus on the outcome rather than tooling.


I've been on teams that didn't use Docker for dev envs and teams that did over the last ~5 years. The teams that used Docker were significantly more productive, since each dev env was a replica and it forced the team to maintain a common env. The dev-env drift without it is huge and causes so many "it works on my machine" problems. I can see a one-man show, or maybe two people, not needing a Docker dev env, but for any sizable team it is IMO absolutely required now. You're absolutely right that it is a hoop to jump through, but it's a hoop I find very necessary.


I've worked on Rails, Node, and Clojure teams where Docker was optional and was always glad I avoided it. There has never been something important in my dev environment[1] that couldn't be responsibly managed by rbenv and bundler (or their equivalents for other languages).

[1] The one exception is SQL DBs and redis, but I have always been fine using brew, linuxbrew, or apt for them.


Oddly, my experience is a mixed bag here. Teams I've been on that went for rigid standardization with docker have largely been much harder to upgrade than those that allow local dev however the dev wants.

This is subtle. There are more bumps along the way. However, the task of upgrading isn't neglected forever. Instead, it usually seems that each new member winds up upgrading some small piece and making sure the code works with a small change in environment.


How did you handle differences in macOS/Windows and Linux? The last team I was on that tried to do all local development in Docker kept running into breakages. To allow the devs to use whatever editor or IDE they wanted, the source code was stored in the host system and mounted as a Docker volume. While that sounds simple, it's amazing how many times things broke depending on who set it up.

By default, Docker runs everything as root. This isn't a huge problem on macOS or Windows, where Docker runs in a VM and there's some sort of UID mapper mediating things. If a file is generated from a process within Docker (e.g., temporary or log files) and you're running on Linux, you now have files in your local system that you can't edit without "sudo". Fine, don't run the container as root. I'd have expected that to be a best practice, but it doesn't come up in Docker's best practices guide [0]. Adding our own user and switching to that isn't a huge hurdle, although this didn't seem like an area where we'd have to chart our own path. Unfortunately, some 3rd party images don't run particularly well, if at all, when not run as root.

Once we got that running, we hit our next issue. Although things were working well for one developer, they broke for another because there still existed the same basic problem with Linux users ending up with files owned by UIDs other than their local account. It turns out that not every dev is running with the same UID. Okay, so now we need to map the UID and GID at image build time, but that might break things for people on macOS.

All of our Dockerfiles ended up with something like:

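  # Build-time UID/GID (overridable per host) so files created in bind mounts stay owned by the host user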
  ARG app_user_uid=61000
  ARG app_user_gid=61000
  ENV APP_USER="app"
  RUN groupadd -g $app_user_gid -o $APP_USER
  RUN useradd --no-log-init --create-home --shell /bin/false --gid $app_user_gid --uid $app_user_uid -o $APP_USER
And they needed to be built on Linux with:

  docker-compose build --build-arg app_user_uid=$(id -u) --build-arg app_user_gid=$(id -g) ...
While macOS users used the simpler:

  docker-compose build ...
This all took quite some time to figure out. The person working on it would get things working, think it was all good, push it up, and only find out later that it wouldn't work on another dev's system. CI couldn't catch everything. The point of using Docker was to ensure there weren't inconsistencies across dev environments and that dev matched production. That seems like a fairly common use case, but we couldn't find anything on how to simplify this setup for teams other than mandating every user run the same exact system configuration. I have to believe we were doing something wrong, but we really couldn't find anything on the topic. I'd love to hear how you solved the problem.

[0] -- https://docs.docker.com/develop/dev-best-practices/


> But we're talking about what became easier, right? Not about what became more ideologically correct, semantic, etc.

Then think of it this way: it became easier to make the content/UI both good looking and fully accessible (e.g. to screen reader users, for whom layout tables are indeed a hindrance).

I think the same applies to the other hoops to jump through; they're solving a real problem, just perhaps one that doesn't directly affect many of us so we tend to ignore it.


Sorta. Screen readers, being external programs, really had no trouble dealing with tables. The tag wasn't semantic, sure; but the intent and reasoning wasn't exactly difficult.

I suspect that the giant explosion in primitives for a screen reader to deal with has actually made the job harder.

And this is ignoring the explosion in nested divs that most modern sites have.


Considering that I've developed a screen reader from the ground up, worked on the Narrator screen reader shipped with Windows, and routinely used screen readers to browse the web for almost 20 years, I think I know a thing or two about this.

So, it's true that screen readers have heuristics for detecting layout tables. But those heuristics aren't perfect. They're not even that sophisticated, at least the ones that I worked on. And some (most?) screen readers automatically announce any table that they don't detect as a layout table, even when continuously reading straight through the document. Notably, the screen reader I developed myself, which was the first one I routinely used, doesn't automatically announce tables unless the user is manually moving through the page, because I wanted to minimize verbal clutter when casually browsing the web. But when I went to work at Microsoft and started using Narrator to read my work email, I found the tables in certain HTML emails very annoying. Granted, Narrator didn't yet have a layout table heuristic at that time, but even when it did, that didn't entirely solve the problem. Luckily, there was also an overhaul in Narrator's verbosity levels while I was there, and that allowed me to eliminate the clutter of layout tables by reducing the verbosity level in my personal Narrator settings, perhaps at the cost of missing some other things.

Nested divs per se aren't a problem. Implementing a widget as a div without the appropriate ARIA markup is a problem.


This feels in line with what I intended. Used sparingly, tables probably weren't that atrocious to figure out. Similarly, deliberate use of semantic tags is likely better. I question whether things are that deliberate, or much better, now. But you provide a convincing and authoritative answer there.

HTML email feels like a trap. It's about turning up the volume and getting returns, not about being accessible. Is that not the case?


About HTML email, I think it's different in a work context. When I was at Microsoft, some official messages, that were actually important to read, were laid out in the style of an HTML marketing email, complete with heavy abuse of layout tables. Ditto for automated notifications, e.g. about bugs or PRs.


Ah, that makes sense. And feels like a mistake from the department that sent the messages.

My gut is still that better tooling support for standard messages would have helped faster than more primitives in the message. But, I welcome evidence that I'm wrong.


It really depends. If table cell order happens to conform with a logical reading order, yeah, screen readers generally figured out when to ignore table semantics. But if the reading order does not make sense going row by row, then they are terribly broken.

Nested divs aren't that bad. (Ignoring overall performance considerations. And even then I'd suspect that's less a concern than download and running all these JS libraries.) Nested divs with a mish-mash of poorly applied ARIA can get pretty bad though!

Table layouts are also terrible for anyone who is not viewing the site at the browser width you designed it for. Accessibility is more than screen readers, and the ability to zoom or adjust browser width or display on a different device (hello mobile) is incredibly valuable for a lot of people. And I'd argue that the "ease" of using table layouts disappears as soon as you try to accommodate for any of that.


Right, my complaint about nested divs is more when they are a giant mix of different levels with no real discernible logic at the markup level. I think my memory is more from the garbage we made in the early 2000s, though. I haven't tried to look at any modern page markup lately.


Yeah, I may be the only one, but I still think async/await was a huge missed opportunity to get rid of promises, like I proposed here for JavaScript:

https://es.discourse.group/t/callback-based-simplified-async...

It seems quite inefficient to me to wrap every function call in a caching layer and state machine that pretty much goes unused in the context of async/await and makes things unnecessarily hard to understand for new programmers.


Out of curiosity what were the

> [...] gotchas of .NET Core async/await APIs drove me away.

I kind of lost track of the Framework-to-Core changes, but I think it became simpler with the removal of synchronization contexts?


Async/await is not for parallelism.


The article actually said "concurrency." In the case of the article, it's being oversold because it's not that much better than the old one thread per request model. Cross-request interactions are rare, so you're more or less writing the same code, only with async, you have to be careful not to accidentally do something CPU-intensive for 50ms, and if your runtime supports multiple cores (so not Ruby or Python), you still have to be careful with race conditions and locks.

Writing parallel code that isn't clearly a divide and conquer variant is hard. Async doesn't help with that at all; it gives you more efficient context switches.

I personally find threaded code easier to reason about because there can be a context switch anywhere, so you always need to think about locks and coordination. With async code, a block of code without locks can be correct... until you add an await in the middle, and this isn't always obvious.
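
A minimal Python sketch of that trap, with asyncio.sleep(0) standing in for any real await:

  import asyncio
  counter = 0
  async def increment():
      global counter
      value = counter          # read the shared value
      await asyncio.sleep(0)   # suspension point: other tasks run here
      counter = value + 1      # write based on a stale read -> lost updates
  async def main():
      await asyncio.gather(*(increment() for _ in range(100)))
      print(counter)           # typically prints 1, not 100
  asyncio.run(main())

Delete the await and each read-modify-write runs to completion before the next task starts, so the count comes out right; add it back and the tasks interleave at the suspension point and updates get lost.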


How about things that used to be easy and now are hard? Or things that used to be hard, and now are intractable?


I used to have a very boring job of comparing two forms and checking for discrepancies. Quite a meaningless job nowadays; we just use tools like

https://www.freetextcompare.com/


What about spooling up 10,000 servers to do a job and then deleting them when it's finished? I've literally done this for image processing and it's not something that I could ever have done in the past.


What a great list of "Things I've been meaning to learn for years!"

I was all hip and cool once, and now feel like a dinosaur. This should help me catch up


I would add also PCBs for hardware prototyping. Ordering custom PCBs is ridiculously cheap and easy now.


> VPNs, with Wireguard

Just tried it on iOS, please scratch it from the list


> VPNs, with Wireguard

I'd say: Tailscale.

I'd also add Kafka for integrating different systems/databases, etc.


> Writing code that runs on GPU in the browser (maybe with Unity?)

Is the author referring to WebAssembly ?



I'll add, "knowing what you're worth". I keep thinking levels.fyi, blind, etc. are underappreciated in their value as "disruptive" initiatives of the last few years, and will have a far greater social impact than bitcoin, self-driving cars, etc.


I really like that XKCD comic about identifying birds in a photo, so it's unfortunate that it didn't age that well. How could we rewrite that comic to today's tech? I was thinking something like:

A: I want to create an app that scans good feedback of our brand online...

B: there's a library for that, consider it done

A: ... and understand if they are actually being sarcastic.

B: I'm gonna need 5 years and a lab of experts.


It aged perfectly IMO. Maybe the timeline was a little bit off, but it did indeed take several years and a huge amount of effort by experts to make object recognition available "off the shelf".

And even then, problems can quickly escalate from "any programmer with a little experience can do it" to "we need a team of data scientists and data engineers".


Yeah, and even though Merlin and Google Photos (for a broader set of things) are pretty amazing, a naturalist familiar with the area will probably still blow them away with the things they can recognize quickly, even without having a great view.


When was that XKCD published (https://xkcd.com/1425/)? Yeah, I remember seeing it for the first time; I don't remember my reaction then. But now when I see it, I am stumped!


September 24th, 2014.

To know the date of publication of a given xkcd comic, go to https://xkcd.com/archive/, search for the title you're after, then the date of publication is displayed in the mouse-over text.


So, five years and a pandemic. Not too shabby of a prediction, that!


Somewhat relevant:

TornadoGuard https://xkcd.com/937/


"Building fast programs, with Go/Rust"

Sure, bud.


Could you please stop posting unsubstantive and/or flamebait comments to HN? You've been doing it a lot, unfortunately, and we ban that sort of account. It's not what this site is for, and it destroys what it is for.

https://news.ycombinator.com/newsguidelines.html


In contrast to building fast root escalation exploits which perform useful functionality on the side, which is what we get with less safe memory models.


They are both so much more pleasant to work with than C.


Golang was created by the same people behind C; it's what C++ should have been. It takes a lot of its background from Plan 9, the 'Unix++' OS.


> it's what C++ should have been

No. There's a large class of programs that you can't write in Go that you can write in C++.


No. I meant that C++ should have never been born. For high perf software C should be enough, while Go could work great for generic system binaries.

Today's low end machines are not a Pentium 2 or 3, but a Raspberry Pi B+ with 512 MB of RAM (very low end, a real life machine would be a Pentium 4 with SSE2 and 1GB of RAM or a Core Duo with 2). Enough for Go and statically linked binaries a la plan9/9front.


> No. I meant that C++ should have never been born

C++ was an important stepping stone.

There's still a large class of software for which GC-ed languages are non-starters.


I said that for that, C would be more than adequate. And yet, Go's GC can be tuned. Not for an AAA game, OK. As for C, C + SDL2 would be a good backend for any PC gaming engine.

And if we had had Go instead of C++ since the mid-'90s, today's Go compiler would be much more performant, for sure.


Easy doesn’t (always) equal good.


Laundry list of terrible stuff.

Let's Encrypt is a headache and roots your box.

Concurrency was solved eons ago with common libraries widely used today. Erlang has had it as a design requirement since the beginning.

Centering in CSS! Heck isn't that just an HTML tag?

Building fast programs... with a certain language. Admitting it's slow? I don't get it.

I could go through the whole little list like this.


> Building cross-platform GUIs, with Electron

How about no. I so want this abomination of a technology uninvented.


Please don't take HN threads on repetitive flamewar tangents.

https://news.ycombinator.com/newsguidelines.html


> I so want this abomination of a technology uninvented.

Good enough beats perfect almost every time. The real fail is that the major desktop OSes have spent the last 20 years jockeying for a monopoly on how software is made and distributed on their platforms. Apple and MS (and even the Linux desktops to a lesser degree) want me to have to build and maintain completely separate apps for their platforms to unlock all of the capability in their GUI.

You know what I want as a user? I want to be able to buy Affinity Designer and run it on whatever device I have. I don't want to care about Mac, Windows, Linux, Chromebook or whatever. I would love to have some real Adobe apps on Linux.

As a software developer, it's about time that we stop having the native vs. whatever universal GUI layer debate, too. I've never seen a single ticket for an Electron app where the user was reporting that my app wasn't MacOS-y or Windows-y enough. I have seen plenty of tickets with native apps where something wasn't right with the clipboard or print menu, or some component wouldn't correctly embed on a paste.


Almost everyone is making native mobile apps — two copies of the same thing, for Android and iOS — but for desktop, it's suddenly infeasible? Why so? It has always been the other way around in my mind — my phone is an auxiliary, limited communication device that's awkward to type on. Yet for almost every IT company with a mainstream product, phones are first-class, and computers are an afterthought. How so?


More users.


I really don’t care at all about the resource usage or speed of Electron apps in 2022.

Any semi-modern PC has no problem running Electron apps. Download sizes aren’t an issue with modern internet speeds. It doesn’t make sense to try to optimize for people with the slowest systems when everything is only getting faster.


Ah yes. Semi-modern PC shouldn't have trouble... Download sizes shouldn't be a problem...

Do you realize that you are essentially running a supercomputer? And that this supercomputer is struggling to display even the simplest things? Slack's CPU usage shoots up to 20% of a CPU to display a single animated reaction to a post. Run a few of those "shouldn't have trouble" apps in parallel, and suddenly a computer that can run a 4K next-gen Unreal shooter at 60fps struggles to even maintain cursor latency.


Even on the sort of modern systems available to wealthy developers, try to run more than two Electron apps at a time and tell me you don't feel the sluggishness.


> try to run more than two Electron apps at a time and tell me you don't feel the sluggishness

I run more than two Electron apps at a time and everything is fine.

Unless you’re running ancient hardware with 4GB of RAM and trying to load up many intensive applications (Electron or otherwise) it’s not a problem. If anything gets “sluggish” then you’re probably swapping, but it’s difficult to actually reach that point with just two Electron apps.


A fast CPU and plentiful RAM isn't a license to not optimize your code. A fast internet connection isn't a license to not optimize your binary size.


I keep hearing that argument, but I also dropped Evernote for Apple notes because Evernote was too slow, even on my M1 mac. So there is definitely such a thing as optimizing your programs.


That argument is usually made by the kind of people who consider developer experience the most important part of software development. I'm on the opposite end of this spectrum: I couldn't care less about developer experience, user experience is the king.

Your users won't see your "write less code" crap and all your other "elegant" overengineered solutions that use 5 libraries for something that should be a trivial 10-line function. They'll see that your app is an unreliable resource hog that looks and feels like it's not from this world. And it's never finished. It's constantly updating but always broken one way or another. And they'll suck it up because there are no alternatives and most software these days is like this.

I'll do everything in my power to change this. It's my mission in this world. I want people to rediscover non-crappy software and have their minds blown by what modern computers can do when programmed by people who actually know what they're doing.


Be careful about the baby/bathwater situation. I can deliver a website a lot faster if I get to write it in TypeScript than if I have to write it in C++, even if the latter might execute faster.


There's a big difference between build-time and runtime dependencies. It doesn't matter what you use in your build environment, so yes, I do use TypeScript in my own web projects. It does matter a lot what you leave in your final product, so I'm never using react or any other JS frameworks.

I also use Java for my backend because not having to manage memory manually is worth of the small performance tradeoff.


More resource usage = more energy wasted


Then create an open source cross-platform GUI framework/toolkit that's technologically superior to Electron and also at least as easy and convenient to use and deploy, and people will dump Electron in droves.

I mean, please do! I would be delighted about an efficient, cross-platform, cross-language, convenient GUI toolkit. But I'm not holding my breath, and I'm tired of people complaining about Electron without offering an alternative that's superior along all relevant axes.


Cross-platform GUIs shouldn't exist. That's it. They are always, inevitably, worse than native ones.


You're falling for the "perfect is the enemy of good" fallacy.


I'm just not a fan of half-measures. Especially ones that stick in people's minds as an acceptable way of doing things as opposed to rapid prototyping kind of deal.


Qt 5.


It is as though browser/JavaScript were the only thing out there now. Or at least that appears to be implied in the comments.

Just wanted to state that in other worlds, like C++, there exist tools like wxWidgets which provide cross-platform GUIs.


I know, of course. But at least on macOS, I can usually instantly tell when something is using a cross-platform UI library. Even if controls look native, they're usually laid out wrong and don't follow platform's conventions.

There are examples of the opposite, of course. IntelliJ IDEs, despite using a modified Swing GUI, feel native-ish to me because JetBrains put in enough effort to replicate macOS native behaviors.


But it's the only reason Discord, Slack, VS Code, and other apps exist on operating systems other than Windows/MacOS.


Those are built by large teams with a lot of backing, and still suffer from many issues (mainly performance; Slack crawls very badly, VSCode and Discord not so much); I think it's gotten easier but still wouldn't call it "easy".


Yeah. Slack and Discord sure have enough resources to build three dedicated native apps for macOS, Windows, and Linux. Slack has been through several full rewrites yet still works terribly. At this point it must be nothing but stubbornness. It's definitely not an informed engineering decision.


> It's definitely not an informed engineering decision.

I think it's more likely that they are evaluating the engineering decision by different criteria than you do. The people in most organizations that make the Slack buying decision probably don't know what Electron is. And frankly, Electron Slack works just fine on any modern Windows/Mac business laptop I've used it on.


> Slack and Discord sure have enough resources to build three dedicated native apps for macOS, Windows, and Linux.

Those "insanely complex apps" are... nothing but virtual lists with text and some images. You know that even first-year comp-sci students are given harder tasks?


The reason being those are proprietary tools.


Tbh the world would've been better off if these apps didn't exist. Especially VS Code.


>Especially VS Code.

Why? :c The next best thing is Atom. And, tragically, Atom is way worse. (Believe me, I wanted it to win the hackable-lite-IDE war, but... it didn't.) I don't like Microsoft, but VS Code won. Although, technically, I use "Code - OSS", which is the no-proprietary-code version.


Atom is also Electron-based. Why Electron text editors at all? I personally use Sublime and vim and don't even consider the electron ones.


If Sublime is as good, then you got me. :p (I can't say; I still need to try it.)

I love Kate, although it's less configurable/powerful than Code/Atom, but it's Linux-only.

As to vim, I use it for quick things, but I'm not hardcore enough to feel comfortable with editing in a terminal and having to re-memorize how to highlight and copy-paste text.


> Tbh the world would've been better off if these apps didn't exist. Especially VS Code.

You're confusing "I personally don't like X" with "the world is better off without X".



