I don't think C should be disparaged at all. Just because I prefer Torx screws doesn't mean Phillips screws were a horrible idea. They were brilliantly simple and enormously effective. I can't think of a single situation in which I wouldn't prefer Torx, but Torx wasn't an option historically and Phillips was not the wrong decision at the time. Times change.
China was already sending troops and material to the front lines when MacArthur was ordered to stand down. Pushing further would have meant a hot war with China.
There is no way we could match them in numbers on the ground. Such a conflict would have inevitably led to us nuking them as a result. Which is probably the reason decision makers chose not to.
And maybe that's really the humanitarian failure. That the USA didn't nuke China in 1950 or 1951. Would have solved a lot of problems for generations of people.
Nukes usually don't wipe out entire countries, especially tactical nukes.
I'm far from convinced that using nukes in the Korean War would've been a good move, but equating it with "kill[ing] them all" is completely dishonest. What's your goal in this debate, and is it served by dishonest rhetoric?
The USA dropping nukes again would have prevented the convention against using nukes in war from ever being established. I think there's a pretty good chance we wouldn't have any civilization left by now if we had gone down that fork in history.
How is nuking Japan different from nuking Korea? Everybody agrees that forcing Japan to surrender with nukes was much better for everyone involved than a ground invasion.
When Japan was bombed, nobody else in the world had nuclear weapons, the US only had 2, and there were only a handful of people outside of the US seriously researching nuclear weapons, and they were still years away from a test. By 1950 the USSR had working nuclear bombs, had proven so with a nuclear test, and a dozen other countries had started their own nuclear weapons programs.
Maybe the real humanitarian failure is that the US didn't nuke everybody and start over from the stone age. Can't have any societal problems if no societies exist, right?
Obligatory note that non-coding DNA sequences are often involved in expression regulation, DNA folding, and other interactions which aren't yet well understood. Just because a section of DNA does not encode a protein does not mean it's inactive in other life processes.
The conflicting beliefs seem to allow for falsifiability and thus experiment.
Case 1: long stretches of "non-coding" DNA indeed are "useless", but then also a material and energetic drain.
Case 2: long stretches of "non-coding" DNA actually have a use, and are thus a proliferative gain.
Case 3: for some stretches case 1 holds and for others case 2 holds.
Suppose a specific stretch is questioned for utility: prepare two populations of organisms, one with the stretch intact and one with the stretch removed (so that there is identical genetic diversity in both populations).
Then let a minority of "intact" organisms compete against a majority of "genome light" organisms, repeat a few times.
Also let a minority of "genome light" organisms compete against a majority of "intact" organisms.
If case 1 holds for a specific stretch: the modified "genome light" organisms will have a selective advantage due to energy and materials savings when duplicating their genomes.
If case 2 holds for the same stretch: the unmodified "intact" organisms will have a selective advantage.
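To make the expected dynamics concrete, here's a toy replicator-style sketch (the 1% replication advantage for the "genome light" variant is a made-up number purely for illustration; under case 2 you'd flip the fitness values and the intact variant wins instead):

    // Toy model: two genotypes competing, tracked as frequencies over
    // discrete generations. w_light > w_intact encodes the hypothetical
    // case-1 savings from copying less DNA each division.
    fn main() {
        let (mut intact, mut light) = (0.9_f64, 0.1_f64); // "light" starts as a minority
        let (w_intact, w_light) = (1.00_f64, 1.01_f64);   // assumed relative fitness

        for generation in 0..=500 {
            if generation % 100 == 0 {
                println!("gen {generation:>3}: intact {intact:.3}, light {light:.3}");
            }
            // Standard discrete-time replicator update.
            let mean_fitness = intact * w_intact + light * w_light;
            intact = intact * w_intact / mean_fitness;
            light = light * w_light / mean_fitness;
        }
    }

Even a 1% edge takes the minority variant to roughly 94% of the population within 500 generations, which is why the competition setup should give a clear readout either way.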
We will likely continue to discover ways in which non-coding DNA is used by life; however, there is no question that non-coding DNA is far from "useless", and that has been clear for some time.
Within non-coding DNA there do exist some sections with no known biological function, which some people call "junk DNA". However, there is much disagreement about this, and we have only relatively recently begun to directly image structures on the scale of DNA and proteins in situ via cryo-electron microscopy, allowing us to study the mechanisms and motions of biological machinery frozen in action. DNA and cellular machinery are still far too complex to simulate fully, so cryo-EM is one of the best available tools for studying them. For those reasons, and because the fraction of the genome folks refer to as "junk DNA" has steadily dwindled over the years as functions have been discovered, it's reasonable to expect we'll discover more.
Agreed. There's something about the gestational phase, aka nanotechnological self-assembly, that surely requires at least a few lines of code(!) and which otherwise is never used again -- until passed on to the next generation. Probably a good bet that the "repetitive elements" are accumulated lines of code for all successive phases of fetal development, from a single cell to two, to four, etc., until all echoes of evolution are replayed and the present species emerges. "Junk," indeed.
> Rust makes it easy to write correct software quickly, but it’s slower for writing incorrect software that still works for an MVP.
I don't find that to be the case. It may be slower for a month or two while you learn how to work with the borrow checker, but after the adjustment period, the ideas flow just as quickly as any other language.
Additionally, being able to tell at a glance what sort of data functions require and return saves a ton of reading and thinking about libraries and even code I wrote myself last week. And the benefits of Cargo in quickly building complex projects cannot be overstated.
All that considered, I find Rust to be quite a bit faster to write software in than C++, which is probably its closest competitor in terms of capabilities. This can be seen at a macro scale in how quickly the Rust library ecosystem has grown.
I disagree. I've been writing heavy Rust for 5 years, and there are many tasks for which what you say is true. The problem is that Rust is a low-level language, so there is often ceremony you have to go through, even if it doesn't give you value. Simple lifetimes aren't too bad, but between that and trait bounds on someone else's traits that have 6 or 7 associated types, it can get hairy FAST. Then consider a design that would normally have self-referential structs, or uses heavy async with pinning, async cancellation, etc. etc.
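For a taste of that ceremony, here's a small made-up sketch (the Storage trait, MemStore, and dump helper are all hypothetical, and much tamer than the real 6-or-7-associated-type cases); even a trivial generic helper ends up restating bounds on the associated types:

    use std::collections::BTreeMap;
    use std::fmt::Debug;

    // Stand-in for a third-party trait with several associated types.
    trait Storage {
        type Key: Ord + Clone;
        type Value: Debug;
        type Error: Debug;
        fn get(&self, key: &Self::Key) -> Result<Option<&Self::Value>, Self::Error>;
    }

    // Even this tiny helper needs its own where-clause on an associated type.
    fn dump<S: Storage>(store: &S, key: &S::Key)
    where
        S::Key: Debug,
    {
        match store.get(key) {
            Ok(Some(v)) => println!("{key:?} => {v:?}"),
            Ok(None) => println!("{key:?} is missing"),
            Err(e) => println!("lookup failed: {e:?}"),
        }
    }

    struct MemStore(BTreeMap<String, u64>);

    impl Storage for MemStore {
        type Key = String;
        type Value = u64;
        type Error = std::convert::Infallible;
        fn get(&self, key: &String) -> Result<Option<&u64>, Self::Error> {
            Ok(self.0.get(key))
        }
    }

    fn main() {
        let store = MemStore(BTreeMap::from([("answer".to_string(), 42)]));
        dump(&store, &"answer".to_string());
    }

Multiply that by real trait hierarchies, lifetimes, and async bounds, and the boilerplate piles up quickly.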
I do agree that OFTEN you can get good velocity, but there IS a cost to any large scale program written in Rust. I think it is worth it (at least for me, on my personal time), but I can see where a business might find differently for many types of programs.
> The problem is that Rust is a low-level language, so there is often ceremony you have to go through, even if it doesn't give you value.
As is C++, which I compared it to, where there is even more boilerplate for similar tasks. With C++ I spent so much time just integrating disparate build systems with tools like Make and CMake - work that evaporates to nothing in Rust. And that's before I even get to writing my code.
> I do agree that OFTEN you can get good velocity, but there IS a cost to any large scale program written in Rust.
I'm not saying there's no cost. I'm saying that in my experience (about 4 years into writing decently sized Rust projects now, 20+ years with C/C++) the cost is lower than C++'s. C++ is one of the worst offenders in this regard; just about any other language is easier and faster to write software in, but also less capable for odd situations like embedded, so that's not a very high bar. The magical part is that Rust seems just as capable as C++ at a somewhat lower cost. I find that the cost of Rust often approaches that of languages like Python, where I can just import a library and go. But Python doesn't let me dip down to the lower level when I need to, whereas C++ and Rust do. Of the languages which let me do that, Rust is faster for me to work in, no contest.
So it seems like we agree. Rust often approaches the productivity of other languages (and I'd say surpasses some), but doesn't hide the complexity from you when you need to deal with it.
> I don't find that to be the case. It may be slower for a month or two while you learn how to work with the borrow checker, but after the adjustment period, the ideas flow just as quickly as any other language.
I was responding to "as any other language". Compared to C++, yes, I can see how iteration would be faster. Compared to C#/Go/Python/etc., no, Rust is a bit slower to iterate for some things due to the need to provide low-level details sometimes.
> Rust is a bit slower to iterate for some things due to the need to provide low-level details sometimes.
Sometimes specific tasks in Rust require a little extra effort - like interacting with the file picker from WASM required me to write an async function. In embedded I sometimes need to specify an allocator or executor. Sometimes I need to wrap state that's used throughout the app in an Arc<Mutex<...>> or the like. But I find that there are things like that in all languages around the edges. Sometimes when I'm working in Python I have to dip into C/C++ to address an issue in a library linked by the runtime. Rust has never forced me to use a different language to get a task done.
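For example, the shared-state wrapping looks roughly like this (minimal sketch; AppState is a hypothetical stand-in for whatever the app actually shares):

    use std::sync::{Arc, Mutex};
    use std::thread;

    #[derive(Default, Debug)]
    struct AppState {
        frames_rendered: u64,
    }

    fn main() {
        // One shared, reference-counted, lock-protected value.
        let state = Arc::new(Mutex::new(AppState::default()));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let state = Arc::clone(&state);
                thread::spawn(move || {
                    // Each worker must take the lock before touching shared data.
                    state.lock().unwrap().frames_rendered += 1;
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }
        println!("{:?}", state.lock().unwrap());
    }

A few extra characters compared to just passing a reference around, but it's the same kind of plumbing you'd end up writing (or wishing you had written) by hand in C++.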
I don't find the need to specify types to be a particular burden. If anything it speeds up my development by making it clearer throughout the code what I'm operating on. The only unsafe I've ever had to write was for interacting with a GL shader and for binding to a C library - just the sort of thing it's meant for, and not really possible in those other languages without turning to C/C++. I've always managed to use existing data structures or composites thereof, so that helps. But that's all you get in languages like C#/Go/Python/etc. as well.
The big change for me was just learning how to think about and structure my code around data lifetimes, and then I got the wonderful experience other folks talk about where as soon as the code compiles I'm about 95% certain it works in the way I expect it to. And the compiler helps me to get there.
That's a fairly accurate idea of it. Some folks complain about Rust's syntax looking too complex, but I've found that the most significant differences between Rust and C/C++ syntax are all related to that metadata (variable types, return types, lifetimes) and that it's not only useful for the compiler, but helps me to understand what sort of data libraries and functions expect and return without having to read through the entire library or function to figure that out myself. Which obviously makes code reuse easier and faster. And similarly allows me to reason much more easily about my own code.
The only thing I really found weird syntactically when learning it was the single quote for lifetimes because it looks like it’s an unmatched character literal. Other than that it’s a pretty normal curly-braces language, & comes from C++, generic constraints look like plenty of other languages.
Of course the borrow checker, and knowing when to use lifetimes, can be complex to learn, especially if you're coming from GC-land; it's just that the language syntax isn't really that weird.
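For anyone who hasn't run into it, the 'a in question is just this (the standard textbook example); the annotation says the returned reference can't outlive either input:

    // 'a ties the output's borrow to the shorter-lived of the two inputs.
    fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
        if x.len() >= y.len() { x } else { y }
    }

    fn main() {
        let a = String::from("borrow");
        let b = String::from("checker");
        println!("{}", longest(&a, &b));
    }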
Agreed. In practice Rust feels very much like a rationalized C++ in which 30 years of cruft have been shrugged off. The core concepts have been reduced to a minimum and reinforced. The compiler error messages are wildly better. And the tooling is helpful and starts with opinionated defaults. Which all leads to the knock-on effect of the library ecosystem feeling much more modular, interoperable, and useful.
Someone always crawls out of the woodwork to repeat this supposed "fact" which hasn't been true for the entire half-century it's been repeated. Jim Keller (designer of most of the great CPUs of the last couple decades) gave a convincing presentation several years ago about just how not-true it is: https://www.youtube.com/watch?v=oIG9ztQw2Gc Everything he says in it still applies today.
Intel struggled for a decade, and folks think that means Moore's law died. But TSMC and Samsung just kept iterating. And hopefully Intel's 18A process will see them back in the game.
During the 1990s (and for some years before and after) we got 'Dennard scaling'. The frequency of processors tended to increase exponentially, too, and featured prominently in advertising and branding.
I suspect many people conflated Dennard scaling with Moore's law and the demise of Dennard scaling is what contributes to the popular imagination that Moore's law is dead: frequencies of processors have essentially stagnated.
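For reference, the ideal Dennard (constant-field) scaling rules, stated roughly: shrink linear dimensions by a factor \kappa and

    P = C V^2 f, \qquad C \to C/\kappa, \quad V \to V/\kappa, \quad f \to \kappa f \;\Longrightarrow\; P \to P/\kappa^2

so each transistor's power falls as fast as its area does, power density stays flat, and frequency climbs for free. Once voltage stopped scaling (leakage became the limit around the mid-2000s), that free frequency ride ended even though transistor counts kept doubling.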
Yup. Since then we've seen scaling primarily in transistor count, though clock speed has increased slowly as well. Increased transistor count has led to increasingly complex and capable instruction decode, branch prediction, out-of-order execution, larger caches, and wider execution pipelines in an attempt to increase single-threaded performance. We've also seen the rise of embarrassingly parallel architectures like GPUs, which make more effective use of additional transistors despite lower clock speeds. But Moore's been with us the whole time.
Chiplets and advanced packaging are the latest techniques improving scaling and yield, keeping Moore alive. As are continued innovations in transistor design, light sources, computational inverse lithography, and wafer-scale designs like Cerebras.
Yes. Increase in transistor count is what the original Moore's law was about. But during the golden age of Dennard scaling it was easy to get confused.
Agreed. And specifically, Moore's law is about transistors per constant dollar, because even in his time, spending enough could get you scaling beyond what was readily commercially available. Even if transistor count per chip had stagnated, there would still be a massive improvement from the $4,000 386SX Dad somehow convinced Mom to greenlight in the late 80s to a $45 Raspberry Pi today. And that factors into the equation as well.
Of course, feature size (and thus chip size) and cost are intimately related (wafers are a relatively fixed cost), and related as well to production quantity and yield (equipment and labor costs divide across all chips produced). That the whole thing continues scaling is non-obvious, a real insight, and something close to a modern miracle - thanks to the hard work and effort of many talented people.
The way I remember it, it was about the transistor count in the commercially available chip with the lowest per transistor cost. Not transistor count per constant dollar.
Wikipedia quotes it as:
> The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.
But I'm fairly sure, if you graph how many transistors you can buy per inflation adjusted dollar, you get a very similar graph.
Yes. I think you're probably right about phrasing. And transistor count per inflation adjusted dollar is the unit most commonly used to graph it. Similar ways to say the same thing.
> 90% of games work fine, but many have weird bugs like crashing when you Alt-Tab out.
This isn't a particularly Linux-y issue. I've had the same sort of behavior in numerous games on Windows, up to and including crashing the graphics driver when alt-tabbing out of a full-screen game. Seems to be something gamedevs are not commonly testing, and perhaps difficult to defend against when a game is directly interacting with the GPU.
> Seems to be something gamedevs are not commonly testing, and perhaps difficult to defend against when a game is directly interacting with the GPU.
I can guarantee you any gamedev worth his salt will have used alt-tab at some point in the game's development on windows. It's an incredibly common hotkey to use, and the devs very likely have multiple IDEs, notepads, and image editing programs running concurrently. You seem to be trying really hard.
> when a game is directly interacting with the GPU.
Most devs are using cross platform graphics APIs. OpenGL/DirectX/Vulkan. Alt-tab breaking is likely an OS issue.
> I can guarantee you any gamedev worth his salt will have used alt-tab at some point in the game's development on windows.
Not exactly a repeatable testing framework, that.
> You seem to be trying really hard.
I almost strained a typing finger! /s lol
> Most devs are using cross platform graphics APIs. OpenGL/DirectX/Vulkan. Alt-tab breaking is likely an OS issue.
All the OSes seem to suffer from it similarly. More likely the issue is that even the cross-platform graphics APIs rely heavily on shared memory buffers, and most games depend on code written in languages which aren't strictly memory safe. Sharing a memory buffer between CPU and GPU (or even just between multiple CPU cores) is quite difficult to do safely under all possible circumstances without proper language support.
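To illustrate what I mean by language support - sticking to the CPU-threads case, since the GPU side can't be expressed this directly - Rust will only let two threads write into the same buffer if the compiler can prove the writes don't overlap. A minimal sketch:

    use std::thread;

    fn main() {
        let mut framebuffer = vec![0u8; 8];

        // split_at_mut hands out two provably disjoint mutable halves;
        // handing both threads the whole buffer would not compile.
        let (front, back) = framebuffer.split_at_mut(4);

        thread::scope(|s| {
            s.spawn(|| front.fill(0xAA));
            s.spawn(|| back.fill(0x55));
        });

        println!("{framebuffer:?}");
    }

The equivalent C or C++ compiles just as happily with overlapping writes, and that's exactly the class of bug that tends to surface at awkward moments like an alt-tab.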
Perhaps you're not a software developer. Most devs understand that there's a big difference between "it worked for me a few times on my development workstation" and "it's routinely tested in all possible configurations under a variety of circumstances as part of a test harness or CI/CD process".
In fairness to game devs, alt-tab'ing out of a running game would be a challenge for many testing frameworks as it's not something you can do at compile time, requires running the game for a period of time (CI servers don't typically have GPUs), requires some sort of keyboard/mouse automation, and interaction with the underlying OS in addition to the game.
Issues which aren't added to some sort of test suite/CI tend to creep back into codebases, especially rapidly developed codebases like games. And threading issues are notoriously challenging to reproduce. Hopefully that helps you understand the difference.
Many game devs develop on Windows, and for good reason: most of their customer base is there, plus the stability of drivers there.
The assumptions you've been taught in compsci circles, with many resources at their disposal, don't hold up in places where fast iteration is required and there's little time to set up testing frameworks, because, as you said, these things are hard to test.
I would say most game devs. That doesn't change a thing I've said.
> plus the stability of drivers there
Driver code for Nvidia, AMD, and Intel is shared across Windows and Linux these days, and has been for years. AMD has even made a point of pulling improvements from the Linux drivers back into the Windows drivers.
> The assumptions you've been taught in compsci circles, with many resources at their disposal, don't hold up in places where fast iteration is required
I have developed games. I write libraries useful for gamedev. Every game developer I know uses version control and most use CI. Contrary to your opinion, version control and CI help to iterate faster with greater confidence, and don't require lots of resources, just a GitHub account and 5 minutes.
> Every game developer I know uses version control and most use CI.
Funny because I know game developers who don't use CI. And I've developed games too. It's not as universal as you claim it to be. It was never about arguing the effectiveness of CI.
I'm sure they're out there. A bit like publicly declaring you drive without wearing a seatbelt.
> It was never about arguing the effectiveness of CI.
The original point was that alt-tab'ing out of a game can result in unexpected behavior on any OS. Shared memory buffers in graphics APIs are the most likely culprit. And all graphics APIs on all OSes use them. I'm still not sure what exactly you're trying to argue about that.
That's how working with junior team members or open source project contributors goes too. Perhaps that's the big disconnect. Reviewing and integrating LLM contributions slotted right into my existing workflow on my open source projects. Not all of them work. They often need fixing, stylistic adjustments, or tweaking to fit a larger architectural goal. That is the norm for all contributions in my experience. So the LLM is just a very fast, very responsive contributor to me. I don't expect it to get things right the first time.
But it seems lots of folks do.
Nevertheless, style, tweaks, and adjustments are a lot less work than banging out a thousand lines of code by hand. And whether an LLM or a person on the other side of the world did it, I'd still have to review it. So I'm happy to take increasingly common and increasingly sophisticated wins.
Juniors grow into mids, and eventually into seniors. OSS contributors eventually learn the codebase, you talk to them, you all get invested in the shared success of the project, and sometimes you even become friends.
For me, personally, I just don't see the point of putting that same effort into a machine. It won't learn or grow from the corrections I make in that PR, so why bother? I might as well have written it myself and saved the merge review headache.
Maybe one day it'll reach perfect parity of what I could've written myself, but today isn't that day.
I wonder if that difference in mentality is a large part of the pro- vs anti-AI debate.
To me the AI is a very smart tool, not a very dumb co-worker. When I use the tool, my goal is for _me_ to learn from _its_ mistakes, so I can get better at using the tool. Code I produce using an AI tool is my code. I don't produce it by directly writing it, but my techniques guide the tool through the generation process and I am responsible for the fitness and quality of the resulting code.
I accept that the tool doesn't learn like a human, just like I accept that my IDE or a screwdriver doesn't learn like a human. But I myself can improve the performance of the AI coding by developing my own skills through usage and then applying those skills.
> It won't learn or grow from the corrections I make in that PR, so why bother?
That does not match my experience. As the codebases I've worked on with LLMs become more opinionated and stylized, the LLM seems to do a better job of following the existing work. And over time the models have absolutely improved in terms of their ability to understand issues and offer solutions. Each new release has solved problems for me that the previous ones struggled with.
Re: interpersonal interactions, I don't find that the LLM has pushed them out or away. My projects still have groups of interested folk who talk and joke and learn and have fun. What the LLMs have addressed for me in part is the relative scarcity of labor for such work. I'm not hacking on the Linux Kernel with 10,000 contributors. Even with a dozen contributors, the amount of contributed code is relatively low and only in areas they are interested in. The LLM doesn't mind if I ask it to do something super boring. And it's been surprisingly helpful in chasing down bugs.
> Maybe one day it'll reach perfect parity of what I could've written myself, but today isn't that day.
Regardless of whether or not that happens, they've already been useful for me for at least 9 months. Since O3, which is the first one that really started to understand Rust's borrow checker in my experience. My measure isn't whether or not it writes code as well as I do, but how productive I am when working with it compared to not. In my measurements with SLOCCount over the last 9 months, I'm about 8x more productive than the previous 15 years without (as long as I've been measuring). And that's allowed me to get to projects which have been on the shelf for years.
Same! And then they made new eDRAM for a hot minute as part of Crystal Well. It'd be fun to see them get back into the game in a serious way, but their on-again-off-again attitude toward dGPUs does not give me confidence in their ability to execute on such long-term plans.
No, the 50+ years of ridiculously unavoidable memory corruption errors have done more to disparage C than anyone working in another language.