
I used wxWidgets to build a native GUI for Windows + Mac 10+ years ago and implemented all the GUI drawing (it was control software for an audio signal processor, so it included meters, faders, knobs and an audio spectrum display, and I even incorporated the Horde3D OpenGL interface for visualising an arena [sadly never finished to its full potential, as my modelling abilities in Blender simply weren't good enough]). I wrote that, and another guy wrote the network library in C that sent signals to the network devices and received them. I responded to the incoming network info to draw the appropriate parts of the UI, like meters/scopes, at a 50ms minimum refresh.

The fact that we did this with a 1-man team for the GUI, and that I could still compile it today (if I had the code) against wxWidgets to run on macOS and Windows, simply shows the lazy nature of (most/all?) desktop apps by big companies these days.

I utterly detest using them, but it seems customers think an app that takes 5+ seconds to launch, showing a spinning whirly wheel and a horizontal gradient animation over list views before any content is loaded, is perfectly acceptable. Quality with a capital K!


How can C++ not be the "right" language? It seems to meet all the requirements for event-driven GUIs - event handlers are function callbacks after all...
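
For illustration, binding a handler in wxWidgets really is just registering a callback - a minimal sketch, assuming wxWidgets 3.x (MyFrame/MyApp are made up for the example):

    #include <wx/wx.h>

    // A frame with one button; the click handler is a plain callback.
    class MyFrame : public wxFrame {
    public:
        MyFrame() : wxFrame(nullptr, wxID_ANY, "Demo") {
            auto* button = new wxButton(this, wxID_ANY, "Click me");
            // Bind() registers any callable as the event handler.
            button->Bind(wxEVT_BUTTON, [](wxCommandEvent&) {
                wxLogMessage("Button clicked");
            });
        }
    };

    class MyApp : public wxApp {
    public:
        bool OnInit() override {
            (new MyFrame())->Show();
            return true;
        }
    };
    wxIMPLEMENT_APP(MyApp);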

C++ works, but compared to other languages it's often no longer the most productive choice for UI work. Modern UI code is mostly glue and state management, where fast iteration matters more than squeezing out maximum performance. And when performance does matter, there are also newer, safer languages.

For teams comfortable with C++ or with existing C++ libraries to integrate, it can of course still be a strong choice, just not the preferred one for most current teams.


But desktop C++ isn't difficult or slow to write...

It seems odd to me that the software world has gone in the direction of "quick to write, slow to run". It should be the other way around. Things of quality (e.g. paintings by Renaissance masters) took time to create, despite being quick to observe.

It also seems proven that releasing software quickly ("fast iteration") doesn't lead to quality - see how many releases of the YouTube or Netflix apps there are on iOS or Android. If speedy releases are what matter, that values the rush to production over quality, much like a processed-food version of edible content.

In a world that is also facing energy issues, shouldn't sluggish and inefficient software be shunned rather than welcomed?

I suppose this mentality is endemic, and it is why we see a raft of cruddy, slow software these days, where upcoming developers ("current teams") value the ease of their own job over performance. It can only get worse if the "it's good enough" mentality persists. It's quite sad.


The part that takes time in UI work isn't wiring up components, it's the small changes: nudging something a pixel to the right, or widening a gap by two pixels. Changing those in a C++ project means recompiling, and that adds up to significant overhead over a day of polishing the UI. If C++ could produce builds in less than a second, this wouldn't be an issue. People value performance in their own tools more than in the tools of their customers.

In wxWidgets you use sizers, so you don't work on pixel-level alignments. I can understand it if you're using an ancient framework like MFC, but even then I seem to recall there was an equivalent sizer system (or it is easy enough to write a class that does the same, repositioning components).
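
A minimal sketch of what that looks like, assuming wxWidgets 3.x (the frame and controls are invented for the example):

    #include <wx/wx.h>

    // Layout is expressed as sizer rules, not pixel coordinates:
    // the sizer recomputes positions whenever the window resizes.
    class MixerFrame : public wxFrame {
    public:
        MixerFrame() : wxFrame(nullptr, wxID_ANY, "Mixer") {
            auto* panel = new wxPanel(this);
            auto* sizer = new wxBoxSizer(wxVERTICAL);
            sizer->Add(new wxStaticText(panel, wxID_ANY, "Level"),
                       0, wxALL, 4);            // fixed size, 4px border
            sizer->Add(new wxSlider(panel, wxID_ANY, 50, 0, 100),
                       1, wxEXPAND | wxALL, 4); // stretches with the window
            panel->SetSizer(sizer);             // the sizer owns the layout
        }
    };

Because the layout is rule-based, nothing needs nudging by hand when the window, font or platform changes.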

I think it is daft to move to shipping a colossal web framework and an entire browser simply because of 1px UI alignments (which have been a solved problem in C++ for decades anyway).


In modern Qt you don't write the UI in C++ anymore - you do that in QML. It is far simpler to create amazing pixel-perfect UIs with drool-inducing animations in QML. I wrote a blog post that talks a bit about this [1].

[1] https://rubymamistvalove.com/block-editor
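
For context, the C++ side then typically shrinks to a thin loader - a minimal sketch, assuming Qt 6 and a main.qml file shipped alongside the binary:

    #include <QGuiApplication>
    #include <QQmlApplicationEngine>
    #include <QUrl>

    // The UI itself lives in main.qml; C++ only boots the QML engine.
    int main(int argc, char* argv[]) {
        QGuiApplication app(argc, argv);
        QQmlApplicationEngine engine;
        engine.load(QUrl::fromLocalFile("main.qml")); // declarative UI
        return app.exec();
    }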


I don't understand why he thinks all users will use WSL on Windows. I have never touched it, and I've developed on Windows for decades (C++/C#/JS/web). It seems like an attempt to make Windows non-native, some sort of semi-Linux.

I have also never touched Docker on Linux, despite having used Linux since the Red Hat 6.0 days (Fedora, and Ubuntu LTS now).

Also, he missed out Shotcut as a decent video editor. It recently enabled a 10-bit workflow (plus the Frei0r plugins are easy enough to write for it, if you so desire).
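
As a rough illustration of how small a Frei0r plugin can be - a sketch assuming the standard frei0r.h header, with an invert effect invented for the example (built as a shared library):

    #include <frei0r.h>
    #include <cstdint>

    // Minimal Frei0r filter: invert RGB, leave alpha untouched.
    struct Inst { unsigned w, h; };

    extern "C" {

    int  f0r_init() { return 1; }
    void f0r_deinit() {}

    void f0r_get_plugin_info(f0r_plugin_info_t* info) {
        info->name = "invert-sketch";
        info->author = "example";
        info->plugin_type = F0R_PLUGIN_TYPE_FILTER;
        info->color_model = F0R_COLOR_MODEL_RGBA8888;
        info->frei0r_version = FREI0R_MAJOR_VERSION;
        info->major_version = 0;
        info->minor_version = 1;
        info->num_params = 0;                  // no adjustable parameters
        info->explanation = "Inverts the image";
    }

    void f0r_get_param_info(f0r_param_info_t*, int) {}
    void f0r_set_param_value(f0r_instance_t, f0r_param_t, int) {}
    void f0r_get_param_value(f0r_instance_t, f0r_param_t, int) {}

    f0r_instance_t f0r_construct(unsigned int width, unsigned int height) {
        return new Inst{width, height};
    }
    void f0r_destruct(f0r_instance_t inst) { delete static_cast<Inst*>(inst); }

    void f0r_update(f0r_instance_t inst, double /*time*/,
                    const uint32_t* in, uint32_t* out) {
        auto* i = static_cast<Inst*>(inst);
        for (unsigned p = 0; p < i->w * i->h; ++p)
            out[p] = in[p] ^ 0x00FFFFFFu;  // flip RGB bytes (little-endian)
    }

    } // extern "C"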


I don't understand how it got so bad. On Windows 95 or 98, you knew that pressing Windows > P > right arrow > N would open Notepad within about 22 milliseconds of interaction. Things just worked and responded.

Today it's utter garbage.


How do you find macOS Tahoe? I have deliberately avoided installing it on my M3 MacBook Air that I use for work mainly due to the lack of attention to detail they seem to have dumped on the UI.

I have used a Mac at work on/off since the Snow Leopard days and I think Snow Leopard made the most sense from a UI point of view, without wedging in iCloud file nonsense.

I have a Windows 10 machine at home for gaming/development, but my daily driver at home is a Lenovo M910 running Linux (small enough and powerful enough for C++ dev), along with a Windows 11 Lenovo mini machine for GeForce Now on a TV in the house - but I do hate using Windows 11.


I think the UI was reasonable and easily understood in 2000. After that, links and buttons seem to have become interchangeable, and now we end up in a mess where scrollbars may or may not be visible until you start fiddling with the UI, etc.

I would have to agree with this. I don't understand people who say developing on Linux is somehow better. I have built C++ software across Windows, macOS and Linux, and I can't say one is easier than the others at all. Perhaps it is because the package management system makes installing a compiler "easier" than downloading Xcode or downloading/running the Visual Studio installer?

I certainly don't find development tools better on Linux, particularly for C++ debugging. Windows/Visual Studio is the leader in that regard.

I have also done C#, PHP, Java, JS + web development across all 3 and don't see the difference.


I have set up ntfy on a Pi at home, and use it to send me Android notifications of headlines every morning.

This is done by a bash script in a cron job that reads RSS feeds and grabs the headlines and links to articles, so I can get a flurry of tech news and general news headlines without having to go into detail on each topic (which, in news terms, is typically slanted with some sort of bias).

So I can stay up to date on general happenings, speedily. It is fairly simple to set up - an LLM will write a suitable bash script to parse RSS XML and grab links and headlines in moments.
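
The publish step itself is just an HTTP POST that ntfy turns into a push notification - a minimal sketch in C++ with libcurl (the server URL, topic and message are placeholders, not the actual setup):

    #include <curl/curl.h>

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        // ntfy's "Title" header becomes the notification title.
        struct curl_slist* headers = nullptr;
        headers = curl_slist_append(headers, "Title: Morning headlines");

        curl_easy_setopt(curl, CURLOPT_URL, "https://ntfy.example.com/news");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS,           // message body
                         "Example headline - https://example.com/article");

        CURLcode rc = curl_easy_perform(curl);               // send the POST

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }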


Do you also believe you should discuss every other topic under the sun in the belief that not discussing it is "ceding ground" to a viewpoint or action of others?

It would seem that in your view, we should be discussing all things at all times due to this "oppressor" mindset.

This simply cannot be true.


You're clearly being facetious. I will simply say that the rise of Nazism/authoritarianism in the West is a preeminent threat to all human beings on the planet, especially when the king of all Nazis has access to nuclear missiles. It obviously demands a sense of urgency that other topics don't.

If you disagree with that, be explicit about exactly what part.


There are lots of threats to all human beings on the planet. You are simply taking the view that the thing you personally see as the biggest threat is the thing everyone else should see as the biggest threat (they won't), and that they should agree with your view (they will not), casting your view of the world as the single truth (it isn't), then presenting it in an ultimatum-based, combative manner, asserting that other people take action based on your narrow definition (they won't).

This prevents any reasonable discourse, so it is unlikely to be successful, no matter how dogmatically or assertively you present your case. Others will simply perceive it as unreasonable, I suspect.

This doesn't mean that you can't still believe it to be true - it just means that others will not engage with the points you mention, and you'll then end up being more isolated and frustrated.


I started watching it, but the modern presentation style of shouting everything instead of speaking at a normal volume, and using many, many gestures and facial expressions to state a simple sentence, made me switch it off rapidly.

It seems to be a presentation style afflicting the YouTube generation, who think you want to see a colossal microphone in someone's face (directional microphones work very well, and are the reason you can hear dialogue in a film without a mic in front of everyone) whilst they gesticulate wildly and over-emphasise words in a sentence. It is quite wearying; perhaps it's because I am British, but it is afflicting general conversation too, where only superlatives can be used ("awesome", "amazing", "insane", "mind-blowing") instead of "good" or "enjoyable".


I guess that's modern YouTube. At least it's not an AI slop channel.


That's true - but still, small mercies!
