Right, for small scripting, not for the majority of the app. All the backend interaction is in C++.
Like, Electron is fine, but it's orders of magnitude slower than it needs to be for the functionality it brings, which is just not ideal for many desktop applications or, especially, the shell itself.
Ultimately people use Electron because they know HTML, CSS, and JS/TS. And, I guess, companies think engineers are too stupid to learn anything else, even though that's not the case. There is a strong argument for Electron. But not for Linux userland dev, where many developers already know Qt like the back of their hand.
As a longtime musician, I fervently believe in doing the best you can with the tools you have.
As a programmer with a philosophical bent, I have thought a lot about the implications and ethics of toolmaking.
I concluded long before genAI was available that it is absolutely possible to build tools that dehumanize the users and damage the world around them.
It seems to me that LLMs do that to an unprecedented degree.
Is it possible to use them to help you make worthwhile, human-focused output?
Sure, I'd accept that's possible.
Are the tools inherently inclined in the opposite direction?
It sure looks that way to me.
Should every tool be embraced and accepted?
I don't think so. In the limit, I'm relieved governments keep a monopoly on nuclear weapons.
The people saying "All AI is bad" may not be nuanced or careful in what they say, but in my experience, they've understood rightly that you can't get any of genAI's upsides without the overwhelming flood of horrific downsides, and they think that's a very bad tradeoff.
If a year ago nobody knew about LLMs' propensity to encourage poor life choices, up to and including suicide, that's spectacular evidence that these things are being deployed recklessly and egregiously.
I personally doubt that _no one_ was aware of these tendencies - a year is not that long ago, and I think I was seeing discussions of LLM-induced psychosis back in '24, at least.
Regardless of when it became clear, we have a right and duty to push back against this kind of pathological deployment of dangerous, not-understood tools.
ah, this was the comment to split hairs on the timeline, instead of discussing how AI safety should be regulated
I think the good news about all of this is that ChatGPT would have actually discouraged you from writing that. In thinking mode it would have said "wow this guy's EQ is like negative 20" before saying "you're absolutely right! what if you ignored that entirely!"
If he (or his employees) is actually exploring genuinely new, promising approaches to AGI, keeping them secret helps avoid a breakneck arms race like the one LLM vendors are currently engaged in.
Situations like that do not increase all participants' level of caution.