As a pedestrian I find cyclists are worse than cars for obstructing my path.
Riding on the footpath (illegal here) even with a bike lane available right next to it, not respecting the traffic lights (mowing through pedestrians on crossings, or blocking pedestrian crossings when stopped at a red light), parking so as to block the footpath (you must leave 1.5 m of footpath unobstructed), riding the wrong way through traffic, flying down bike lanes (40 km/h limit), and raging when anyone infringes their "rights" while they respect no one's.
In my experience, I estimate that 20% of car drivers are a-holes, 50% of truck drivers, and 80% of cyclists.
I would like for Linux to be able to replace Windows.
I run Linux on some of my computers with various levels of success.
But even with Windows 11 being as annoying as it is, and Ubuntu/Mint/Cachy/Fedora/etc having some really good points, they are not as easy to use as Windows.
Sure, web browsing is almost the same and simple home office tasks are close enough.
But all of the complaints that GP has mentioned are valid.
The Windows file chooser is essentially a small Windows Explorer: almost everything you can do in Explorer, you can do while in file-chooser mode. None of the Linux desktops have anything close.
HiDPI and multi-monitor scaling on Linux has gotten better, and it might approach what Windows has had for the last 10 years, but it is not 100% there yet.
Wayland is just a protocol with many incomplete and incompatible extensions that may or may not be implemented by your DE.
VNC is not even remotely close to RDP in features or performance. It just isn't.
I have used RDP over dial-up that was more responsive than VNC over LAN.
Not to mention sound, printer, disks, USB, etc all being available over one RDP connection.
Accessibility on Linux is a joke. The on-screen keyboard may work 80% of the time; the screen reader might work 20% of the time. Sound might come out of a random output, or it might not come out at all. You may have to play with random settings; good luck with that if you are vision impaired.
One big reason Linux isn't there yet is people who just dismiss all of the above and go with "it works for me so it must be good for everyone."
The GTK file picker, which is frustratingly the default even on most KDE installs, is the one that sucks. The KDE-native one would much more closely match the experience you're looking for.
VNC is highly dependent on the implementation. Sunshine/Moonlight runs circles around RDP in terms of performance and includes audio. For situations where you need the extra functionality of RDP... you can just use RDP. It works just fine on Linux, especially if you're on recent KDE.
On-screen keyboards are admittedly a pain point, but I've usually seen people say nicer things about the Linux screen readers than about Windows'. Probably lots of different experiences depending on implementation.
> Windows file chooser is essentially small Windows Explorer and you can do almost everything that you can in the explorer while you are in file chooser mode. None of the Linux desktops have anything close.
I remember KDE copying that a few years after Microsoft introduced Active Desktop. That was, what, 25 years ago now?
Having seen LLMs so many times produce incoherent, nonsensical, invalid answers to even the simplest of questions, I cannot agree with a categorization of "thinking" or "intelligence" that applies to these models.
LLMs do not understand what they "know" or what they output. All they "know" is that, based on the training data, this is most likely what they should output, plus some intentional randomization to make it seem more "human-like".
This also makes it seem like they create new and previously unseen outputs, but that could be achieved with a simple dictionary and a random number generator, and no one would call that thinking or intelligent, as it is obvious that it isn't.
LLMs are better at obfuscating this fact by producing more sensible output than just random words.
LLMs can still be useful but they are a dead-end as far as "true" AI goes. They can and will get better but they will never be intelligent or think in the sense that most humans would agree those terms apply.
Some other form of hardware/software combination might get closer to AI or even achieve full AI and even sentience but that will not happen with LLMs and current hardware and software.
If the payload expands to something too large, then it is easy to detect and ignore. Serve up thousands of 10 kB or 100 kB files that expand to tens of MB with random garbage inside... possibly the same text but slightly modified. That will waste their time and CPU cycles and provide no value to them. Maybe also add a message you want to amplify, so AI bots train on it.
The problem is that believable content doesn't compress well. You aren't going to get anywhere close to that 1:1000 compression ratio unless it's just a single word/character repeated thousands of times.
It's a choice between sending them some big files that will be filtered out long before they can do any real damage, or sending them nonsense text that might actually make its way into their training data.
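The ratio gap is easy to demonstrate with nothing but the standard library (a quick sketch using Python's zlib; the exact numbers are illustrative and depend on the compressor settings):

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Uncompressed size divided by DEFLATE-compressed size."""
    return len(data) / len(zlib.compress(data, 9))

# A megabyte of one repeated character compresses absurdly well...
bomb = b"A" * 1_000_000
# ...while a megabyte of incompressible bytes barely compresses at all.
noise = os.urandom(1_000_000)

print(f"repeated: {ratio(bomb):.0f}:1")   # on the order of 1000:1
print(f"random:   {ratio(noise):.2f}:1")  # close to 1:1
```

Believable prose sits between these extremes, typically in the low single digits, which is why a decompression-ratio cutoff filters out bombs so cheaply.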
We currently don't really know what intelligence is so we don't have a good definition of what to expect from "AI" but anyone who has used current "AI" for anything other than chat or search should recognize that "AI" is not "I" at all.
The "AI" does not "know" anything. It is really a fuzzy search on an "mp3" database (compressed with loss resulting in poor quality).
Based on that, everyone who is claiming current "AI" technology is any kind of intelligence has either fallen for the hype sold by the "AI" tech companies, or is an "AI" tech company (or associated with one) trying to sell you their "AI" model subscription or get you to invest in it.
My work is basically just guessing all the time. Sure, I am incredibly lucky, but seeing my coworkers the Oracle and the Necromancer do their work does not instill a feeling that we know much. For some reason the powers just flow the right way when we say the right incantations.
We bullshit a lot; we try not to, but the more unfamiliar the territory, the more unsupported the claims. This is not deceit, though.
The problem with LLMs is that they need to feel success. When we cannot judge our own success, when it is impossible to feel the energy where everything aligns, that is when we have the most failures. We take a lot for granted and just work off that, but most of the time I need some kind of confirmation that what I know is correct. Our work is at its best when we leave the unknown behind.
UTF8 is a horrible design.
The only reason it was widely adopted was backwards compatibility with ASCII.
There are a large number of invalid byte combinations that have to be detected and discarded.
Parsing forward is complex even before taking invalid byte combinations into account, and parsing backwards is even worse.
Compare that to UTF-16, where parsing forward and backwards is simpler than in UTF-8, and if there is an invalid surrogate combination, one can assume it is a valid UCS-2 char.
UTF-16 is an abomination. It's only easy to parse because it's artificially limited to 1 or 2 code units. It's an ugly hack that requires reserving 2048 code points ("surrogates") from the Unicode table just for the encoding itself.
It's also the reason why Unicode has a limit of about 1.1 million code points: without UTF-16, we could have over 2 billion (which is the UTF-8 limit).
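To make the trade-off in this subthread concrete, here is a small Python sketch (illustrative, not a full decoder): UTF-8 continuation bytes are self-identifying (high bits `10`), which is what makes backward scanning possible at all, while UTF-16 reaches code points above U+FFFF through surrogate-pair arithmetic on those reserved code points.

```python
def utf8_prev_boundary(buf: bytes, i: int) -> int:
    """Step back from index i to the start of the previous UTF-8
    code point by skipping continuation bytes (0b10xxxxxx)."""
    i -= 1
    while i > 0 and (buf[i] & 0xC0) == 0x80:
        i -= 1
    return i

def utf16_surrogate_pair(cp: int) -> tuple[int, int]:
    """Encode a code point above U+FFFF as a UTF-16 surrogate pair."""
    v = cp - 0x10000
    return 0xD800 | (v >> 10), 0xDC00 | (v & 0x3FF)

buf = "a€😀".encode("utf-8")  # 1-byte, 3-byte and 4-byte sequences
assert utf8_prev_boundary(buf, len(buf)) == 4          # start of 😀
assert utf16_surrogate_pair(0x1F600) == (0xD83D, 0xDE00)
```

The surrogate formula also shows where the 1.1-million ceiling comes from: 10 payload bits per surrogate plus the 0x10000 offset caps UTF-16 at U+10FFFF.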
Runtime flexibility is not restricted to dynamically typed languages, it just happens to be less available in some of the popular statically typed languages.
Error handling, expressiveness, testing culture, meta-programming and gradual typing have nothing to do with static vs dynamic typing.
The main "advantage" of dynamically typed languages is that you can start writing code now and not think about it too much. Then you discover all the problems at runtime... forever.
Statically typed languages force you to think a lot more about what you are doing in advance, which can help you avoid some structural issues. Then, when you do refactor, the computer helps you find all the places where you need to change things.
Dynamically typed languages force you to write more tests that are not required in statically typed languages, and that might prompt you to write other tests, but it also increases the chance you just give up when you start refactoring.
Finally, after some time has passed and a few updates have been applied to the language and libraries, you may not have a working project anymore. With statically typed languages you can usually find and fix all the compile errors and have a fully working project again. With dynamically typed languages, you will never know until you exercise every line of code, which will usually happen at runtime and on the client's computer.
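The fail-fast difference described above is visible even before a static checker enters the picture. A tiny Python sketch (field names are made up for illustration): a typo against a plain dict silently yields None and travels onward, while the same typo against a declared structure fails loudly at the call site, and a checker like mypy would flag it before the program even runs.

```python
from dataclasses import dataclass

# Dynamic style: a typo in a key just yields None and flows onward.
user = {"name": "Ada", "email": "ada@example.com"}
typo = user.get("emial")          # no error here; the bug travels
assert typo is None

# Declared style: the same typo fails loudly and immediately.
@dataclass
class User:
    name: str
    email: str

u = User("Ada", "ada@example.com")
try:
    u.emial                        # AttributeError right here
    reached = True
except AttributeError:
    reached = False
assert reached is False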
i see things like that written almost on the daily but i've never seen it come to pass.
this is anecdotal but i've worked professionally in ruby and clojure and it was a pretty good experience. java/kotlin/scala made me wish i could go back to dynamic land...
these days i'm in a rust shop but i keep clojure for all my scripts/small programs needs, one day i intend to go back to some kind of lisp full time though.
Sure, you're right on most of this, but allow me a slight pushback here. I'm sorry, I am inclined to use Clojure/Lisp in my examples, but only because of their recency in my toolbelt; I could probably come up with similar Elixir examples, but I lack intimate familiarity with it.
- Dynamic languages can harbor bugs that only surface in production, sometimes in rarely-executed code paths, yes. However, some dynamically typed languages do offer various tools to mitigate that. For example, take ClojureScript, a dynamically but strongly typed language, and compare it with TypeScript. The type safety of compiled TypeScript completely evaporates at runtime - type annotations are gone, leaving you open to potential type mismatches at API boundaries. There's no protection against other JS code that doesn't respect your types. In comparison, ClojureScript retains its strong typing guarantees at runtime.
This is why many TS projects end up adding runtime validation libraries (like Zod or io-ts) to get back some of that runtime safety - essentially manually adding what CLJS provides more naturally.
If you add Malli or Spec to that, then you can express constraints that would make TypeScript's type system look primitive - simple things like "the end-date must be after the start-date" would make you write some boilerplate in TS; in CLJS it's a simple two-liner.
- Static type systems absolutely shine for refactoring assistance, that's true. However, structural editing in Lisp is a powerful refactoring tool that offers different advantages than static typing. I'm sorry once again for changing the goalposts - I just can't speak specifically for Elixir on this point. Structural editing guarantees syntactic correctness, gives you semantic-preserving transformations, allows fearless large-scale restructuring. You can even easily write refactoring functions that manipulate your codebase programmatically.
- Yes, static typing does encourage (or require) more deliberate API design and data modeling early on, which can prevent architectural mistakes. On the other hand many dynamically typed systems allow you to prototype and build much more rapidly.
- Long-term maintenance: sure, I'll give a point to statically typed systems here, but honestly, some dynamically typed languages are really, really good in that respect. Not every dynamic language is doomed to the "write once, debug forever" characterization. Emacs is a great example - some code in it is from the 1980s and still runs perfectly today; its backward compatibility is almost legendary.
Pragmatically speaking, from my long-term experience of writing code in various programming languages, the outcome often depends not on technical things but cultural factors. A team working with an incredibly flexible and sophisticated static type system can sometimes create horrifically complex, unmaintainable codebases and the opposite is equally true. There's just not enough irrefutable proof either way for granting any tactical or strategic advantage in a general sense. And I'm afraid there will never be any and we'll all be doomed to succumb to endless debates on this topic.
I don't know what I am doing wrong but nothing written in Python has ever worked for me.
I download the .py repo from github or wherever and try to run it - errors.
I try to install the missing libraries, pip this and pip that - errors.
I battle endless dependency errors, and when the .py finally runs - errors - wrong version of whatever library, or the wrong patch of this or that, or the "production ready" .py does not work correctly on Windows, or a single-file script uses an Excel library that has changed in incompatible ways 3 times in 2 years.
I download all the files from the flashy-looking web site and follow all the instructions to the letter - errors.
Python is anything but "robust".
It is the most fragile environment in the universe, worse than C, C++ and JavaScript put together; at least that is my experience with it.
That’s wild. The last time I had that experience with Python must have been more than 10 years ago. I’ll admit that a decade ago you definitely did need to know way too many details about way too many tools, from virtualenvs to setuptools to wheels, etc., to get things working from somebody else’s project, but IMHO, poetry and uv have really changed all that.
This is exactly my experience too. I avoid Python software like the plague, because even when it does work, I cannot rely on it continuing to work when Python gets updated on my system.
You can't really make a judgment on Python from this, as your story just screams user error. I have never encountered anything of what you describe, and I have been programming in Python since before I had any clue what I was doing.
My own experience could hardly be more different. For that matter, I have never had to even look at a "flashy looking web site and follow all the instructions to the letter" in order to install something.
For example, to install yt-dlp, I followed these steps:
sudo apt install pipx
pipx install yt-dlp
Actually, only the second one, because I already had pipx (https://pipx.pypa.io/ — a wrapper for pip that does basic virtual environment management) installed.
Can you name some specific things in Python you have tried to use, and give more concrete descriptions of how you tried to set them up?
I don't like to be seen as defending Microsoft (they definitely have their share of faults), but as far as business goes, I think Microsoft is the least likely company to screw you over as a (business) customer.
Microsoft has kept old software working pretty much unchanged for the last 20 years. I know, I still have software built on early Windows 95/NT4 that works fine on Windows 11... and with some registry tweaks, Windows 11 will run on a computer from 2005 without too many issues (sure, 3rd-party security software and JS-heavy web pages will be slow, but that is not directly MS's fault).
Windows 10 EOL in 2025 is only for consumer-level stuff; you can get Windows 10 enterprise support for another 2 years at least, and some versions even up to 2029. So again, if you are a business, you are taken care of (if you are cheaping out with Windows Home and Pro in a business, then you kind of get what you pay for; I am sure you as a business don't give away free products/services for years on end). And you can keep using Windows 10 after EOL; it's not like they lock you out, they just don't support you... just like you don't repair stuff for free after the warranty ends.
Compare that to any other tech company that churns through hardware and software much faster and much more severely, where old HW and SW no longer work, or cannot connect to the internet, or cannot use the latest browser, so you cannot connect to the latest HTTPS servers. Even open source software breaks compatibility with older versions much more often than Microsoft does, but since it is "free", people just shrug it off.
We're building the business, i.e. setting the foundations we expect to stand on for decades to come. Enterprise license that might be possible to extend into the medium term isn't good enough for our long term commitments and the time to adapt to an alternative family of operating systems is now.
As for stability, if you learned GUI Ubuntu twenty years ago you'll be right at home in contemporary Debian systems, while someone hopping from XP or Server 2000 into 10 or 11 would be quite confused for quite some time. Xenial (2016), Bionic (2018) and Focal (2020) will likely get up to twelve years of security updates each, into the beginning of the 2030s.
I think something similar holds for the SoftMaker office suite. If you learned TextMaker twenty years ago I believe you'll be less annoyed by their 2024 release than if you learned Office 2003 and get dropped into the 365 style applications. Personally I'd use something else entirely, likely doing a roundtrip through LaTeX or straight PostScript under the hood, but it will be interesting to evaluate some MICROS~1 Office alternatives in my organisation and see what, if anything, sticks.
Microsoft's primary business is software running on Microsoft platforms. In the past that was Windows, nowadays it's also Azure.
That famous "developers developers developers" video with Steve Ballmer was a prime example of that corporate ethos.
For most other giant companies in tech, the primary business is either selling a product (and killing competitors) or giving products away as loss leaders and making bank on advertising.
They always had that option, but somehow it was cheaper to ship there and back than to manufacture in the US.
So either you take the 2x tariff increase in price, or you manufacture locally, which increases the price by more than the 2x shipping would.
Oh no, indeed.
It’s not that it’s cheaper overseas, rather it’s often not available domestically at any price. People don’t realize just how absolutely gutted America’s commodity/heavy industrial production capacity really is and the whole system makes it nearly impossible to rebuild.