It will be a mix. There is certainly still a lot that can be done on the demand side, e.g. when to cool a cold-storage warehouse during the day, or when to run certain production lines.
No, capacity factor is a distraction. The only meaningful question is whether it is cost-effective compared to other production methods, and solar - taking the low capacity factor into account - is starting to look very good.
"That makes everything slow, inefficient, and widely dangerous."
There is nothing faster and more efficient than building C programs. I am also not sure what is dangerous about having libraries. C++ is quite different though.
Of course there is. Raw machine code is the gold standard, and everything else is an attempt to achieve _something_ at the cost of performance, C included, and that's even when considering whole-program optimization and ignoring the overhead introduced by libraries. Other languages with better semantics frequently outperform C (slightly) because the compiler is able to assume more things about the data and instructions being manipulated, generating tighter optimizations.
I was talking about building code, not run time. But regarding run time, no other language outperforms C in practice. Your argument about "better semantics" has a grain of truth in it, but it does not apply to any existing language I know of - at least not to Rust, which in practice is for the most part still slower than C.
Neither "ODR violations" nor IFNDR exist in C. Incompatibility across translation units can cause undefined behavior in C, but this can easily be avoided.
The ODR problem is much more benign in C. Undefined behavior at translation time (~ IFNDR) still exists in C, but for C2y we have removed most of it already.
You can't fundamentally solve the issue of what happens if you call a function in another TU that takes a T but the caller and the callee have a different definition of T.
Whether you call that IFNDR or UB doesn't make much of a difference.
C++ mitigates that issue with its mangling (which checks the type name is the same), Rust goes the extra mile and puts a hash of the whole definition of the arguments in the symbol name.
C has the most unsafe solution (no mitigation at all).
C++'s ODR requires the different definitions to consist of the same tokens, and if they do not, the program is IFNDR. Name mangling catches some cases, but this becomes less relevant today with more code being generated from templates in headers.
In C, it is UB when the types are not compatible, which is more robust. In practice it is also easy to avoid with the same solution as in C++, i.e. a single header that declares the object. But even without that, tooling can check consistency across TUs; it is just not required by the ISO standard (which Rust does not have, so the comparison makes no sense). In practice, a GCC LTO build detects inconsistencies.
Different parts of the build seeing inconsistent definitions of the same name is a clear consequence of building things piecemeal rather than as a single project -- which is precisely the problem I described higher up in this thread.
Things being built piecemeal also likely won't be using LTO (even if fat LTO allows this, no static library packages in a distro are built with it).
Not sure what you are trying to say. Inconsistent definitions are a consequence of being able to build things separately, which is a major feature of C and C++, although in C++ it no longer works well. The reason not to build things piecemeal is often that LTO is more expensive. But occasionally running an LTO build will catch violations. When libraries get split up, the interface is defined via headers and there is little risk of an inconsistency. So I really do not think there is a major problem in C.
IMHO the security advantage of Wayland is mostly a myth, and probably the same is true regarding tearing. The latter is probably more an issue of drivers and defaults.
On my desktop computers and on most of my laptops I have never experienced tearing in X11, at least during the last 25 years, using mostly NVIDIA GPUs, but also Intel GPUs and AMD GPUs.
I have experienced tearing only once, on a laptop about 10 years ago, which used NVIDIA Optimus, i.e. an NVIDIA GPU without direct video output that routed through the Intel GPU for display. NVIDIA Optimus was a known source of problems on Linux. Unlike with any discrete NVIDIA GPU, which always worked out of the box for me, with that NVIDIA Optimus setup I had to fiddle with the settings for a couple of days until I solved all the problems, including the tearing.
Perhaps Wayland never had tearing problems, but I have used X11 for several decades on a variety of desktops and laptops and tearing has almost never been a problem.
However, most of the time I have used only NVIDIA or Intel GPUs for display, and it seems that most complaints about tearing have been about AMD. I have always used, and am still using, AMD GPUs too, but for computation, not connected to monitors, so I do not know whether they have tearing problems.