
TrueCrypt had that a decade+ ago.

Not sure if you know the history behind it, but look up Paul Le Roux.

I'd also recommend the book The Mastermind by Evan Ratliff.


IMO Paul Le Roux has nothing to do with TrueCrypt.

He wrote the code base that TrueCrypt is built on, in combination with code he stole. The name is also derived from an early name he chose for the software.

Whether he was involved in the organization and participated in it is certainly up for debate, but it's not like he would admit it.

https://en.wikipedia.org/wiki/E4M


> You are confidently incorrect.

No, he's not. Dragon is using COTS, non-rad-hardened CPUs. And it's rated to carry humans to space.

> AWST: So, NASA does not require SpaceX to use radiation-hardened computer systems on the Dragon?

John Muratore: No, as a matter of fact NASA doesn't require it on their own systems, either. I spent 30 years at NASA and in the Air Force doing this kind of work. My last job was chief engineer of the shuttle program at NASA, and before that as shuttle flight director. I managed flight programs and built the mission control center that we use there today.

On the space station, some areas are using rad-hardened parts and other parts use COTS parts. Most of the control of the space station occurs through laptop computers which are not radiation hardened.

> Q: So, these flight computers on Dragon – there are three on board, and that's for redundancy?

A: There are actually six computers. They operate in pairs, so there are three computer units, each of which have two computers checking on each other. The reason we have three is when operating in proximity of ISS, we have to always have two computer strings voting on something on critical actions. We have three so we can tolerate a failure and still have two voting on each other. And that has nothing to do with radiation, that has to do with ensuring that we're safe when we're flying our vehicle in the proximity of the space station.

I went into the lab earlier today, and we have 18 different processing units with computers in them. We have three main computers, but 18 units that have a computer of some kind, and all of them are triple computers – everything is three processors. So we have like 54 processors on the spacecraft. It's a highly distributed design and very fault-tolerant and very robust.

[1] - https://aviationweek.com/dragons-radiation-tolerant-design
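To make the pair-plus-voting scheme concrete, here's a toy sketch in Python (my own illustration, obviously not SpaceX's flight code): each string is a self-checking pair that drops out if its two computers disagree, and the surviving strings vote 2-of-3.

  from collections import Counter

  def self_checking_pair(a, b):
      # A pair only emits an output if both computers agree with each other.
      return a if a == b else None

  def vote(strings):
      # strings: list of (output_a, output_b), one tuple per computer unit.
      outputs = [self_checking_pair(a, b) for a, b in strings]
      valid = [o for o in outputs if o is not None]
      winner, count = Counter(valid).most_common(1)[0] if valid else (None, 0)
      if count < 2:
          raise RuntimeError("fewer than two strings agree; abort critical action")
      return winner

  # One string fails its self-check; the other two still agree.
  print(vote([("fire", "fire"), ("fire", "hold"), ("fire", "fire")]))  # -> fire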


> Dragon is using COTS, non-rad-hardened CPUs. And it's rated to carry humans to space.

Those are not independent facts. They put the hardware inside, behind the radiation shielding they use to keep the astronauts safe. It's why regular old IBM laptops work on the Space Station too. That kind of shielding is going to blow your mass budget if you use it on these satellites.

SpaceX, which prefers COTS components when it can use them, still went with AMD Versal chips for Starlink, because that kind of high-performance, small-process-node hardware doesn't last long in space otherwise (phone-SoC-based cubesats in LEO never lasted more than a year, and often only a month or so).


> They put the hardware inside,

Which is exactly how you'd do a hypothetical DC in space. Come on, you're arguing for the sake of arguing. COTS works. This is not an issue.

> That kind of shielding is going to blow your mass budget

SpaceX is already leading in upmass by a large margin, and Starship further improves mass to orbit. Again, this is a "solved" issue.

There are other problems in building space DCs. Rad hardening is not one of them. AI training is so fault tolerant already that this was never an issue.


None of the discussed designs include radiation shielding like that. Nobody is considering doing it that way, because with shielding the math really, really doesn't work out (as opposed to unshielded, where it merely doesn't work out).

A cosmic ray striking a chip doesn't just cause a bit flip; it can blow out the whole compute unit and permanently disable it. It's more like a hand grenade going off.


> AI training is so fault tolerant already that this was never an issue.

Such nonsense.


Between FP nondeterminism, FP arithmetic, async gradient updates, CUDA nondeterminism, random network issues, random nodes failing, and so on, a bit flip is the last of your concerns. SGD is very robust to noise; that's why it works with such noisy data, pipelines, compute, and so on. Come on! People in this thread are finding the weirdest hills to die on while being completely off base.
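For the skeptics, here's a toy illustration (a made-up 1-D least-squares problem in Python, nothing like a real training pipeline): even with ~1% of gradient updates replaced by garbage, clipped SGD still lands near the true weight.

  import random

  true_w = 3.0
  data = [(x, true_w * x + random.gauss(0, 0.1))
          for x in (random.uniform(-1, 1) for _ in range(2000))]

  w, lr = 0.0, 0.05
  for x, y in data:
      grad = 2 * (w * x - y) * x
      if random.random() < 0.01:              # ~1% of updates are corrupted
          grad = random.uniform(-1000, 1000)  # stand-in for a bit flip
      grad = max(-1.0, min(1.0, grad))        # gradient clipping, standard practice
      w -= lr * grad

  print(w)  # ends up close to 3.0 despite the corrupted steps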

Carrying humans to space is not the same use case as spending long periods of time in orbit.

Dragon spends 6mo+ in orbit regularly. I have no idea what's happening in this thread, but it seems like everyone is going insane. People don't even know what they're talking about, but they keep making bad arguments. I'm out.

> Dragon spends 6mo+ in orbit regularly.

... hooked up to the ISS, with humans in attendance to fix anything that goes wrong... not doing very much.

It's akin to the difference between a boat moored up in a port, and an autonomous drone in the middle of the Pacific. Aside from that, satellites have to maneuver in orbit (to stay in the correct orbit, and increasingly to avoid other satellites). Hefting around additional kgs of shielding makes that more difficult, and costly in terms of propellant, which is very important for the lifetime of a satellite.


You're replying to a bot, fyi :)

Nope! https://www.linkedin.com/in/philipsorensen

But as a non-native English speaker, I do use AI to help me formulate my thoughts more clearly. Maybe this is off-putting? :)


Yes, that's definitely a bad idea because the community picks up on it and dismisses the entire comment set as generated. Generated comments aren't allowed on HN, and readers are super-sensitive about this these days.

The non-native speaker point is understandable, of course, but you're much better off writing in your own voice, even if a few mistakes sneak in (who cares, that's fine!). Non-native speakers are more than welcome on HN.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Comment 1: https://news.ycombinator.com/item?id=46873799 2026-02-03T17:12:55 1770138775

Comment 2: https://news.ycombinator.com/item?id=46873809 2026-02-03T17:13:40 1770138820

Comment 3: https://news.ycombinator.com/item?id=46873820 2026-02-03T17:14:25 1770138865

All three are detailed comments in different threads, posted exactly 45 seconds apart, unless the HN timestamps aren't accurate.
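The gaps are trivial to verify from the Unix timestamps listed above:

  ts = [1770138775, 1770138820, 1770138865]
  print([b - a for a, b in zip(ts, ts[1:])])  # -> [45, 45]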

That's very impressive if the account is not posting "generated comments", even using speech-to-text via AI. I'll leave it at that.


Appreciate it! I should clarify that it's not just grammatical. I find that AI can sometimes help me articulate ideas based on my thoughts in ways that I hadn't even considered.

Ok, but please don't do it anymore. It's not what we want here, and it will lead to an increasingly hostile reception from HN users. The community here feels very strongly about reserving the space for human-to-human interaction, discussion, thought, language, etc.

If it weren't for the single em-dash (really an en-dash, used as if it were an em-dash), how am I supposed to know that?

And at the end of the day, does it matter?


Some people reply for their own happiness, some reply to communicate with another person. The AI won't remember or care about the reply.

"Is they key unlock here"

Yeah, that hits different.

> If AI can program, why does it matter if it can play Chess using CoT when it can program a Chess Engine instead?

Heh, we really did come full circle on this! When ChatGPT launched in Dec '22, one of the first things people noticed was that it sucked at math. Basic arithmetic like 12 + 35 would trip it up. Then people "discovered" tool use and added a calculator. And everyone was like "well, that's cheating, of course it can use a calculator, but look, it can't do the simple addition logic"... And now here we are :)
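For anyone who wasn't around for that era, the whole "give it a calculator" pattern fits in a few lines. Here's a minimal sketch (a made-up tool-call format, not any particular provider's API): the harness spots the tool call, evaluates the arithmetic itself, and feeds the result back instead of trusting the model's math.

  import ast, operator as op

  OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

  def calc(expr: str) -> float:
      # Safely evaluate basic arithmetic like "12 + 35" via the AST,
      # rather than handing raw eval() to model output.
      def ev(node):
          if isinstance(node, ast.BinOp):
              return OPS[type(node.op)](ev(node.left), ev(node.right))
          if isinstance(node, ast.Constant):
              return node.value
          raise ValueError("unsupported expression")
      return ev(ast.parse(expr, mode="eval").body)

  def run_turn(model_output: str) -> str:
      # Hypothetical protocol: the model emits CALL calc("...").
      if model_output.startswith('CALL calc("') and model_output.endswith('")'):
          expr = model_output[len('CALL calc("'):-len('")')]
          return f"TOOL_RESULT {calc(expr)}"
      return model_output

  print(run_turn('CALL calc("12 + 35")'))  # -> TOOL_RESULT 47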


IMO there's an expectation of baseline intelligence. I don't expect an "AGI" model to beat Magnus Carlsen out of the box, but it should be able to do basic grade-school arithmetic and play chess at a complete beginner level without resorting to external tools.

The problems with your take (and others like it) are manifold.

First, there are some "smells" I noticed. You say that LLMs hallucinate APIs, and in another comment (a brief skim of your history to make sure it's worth replying) you mention chatting with an LLM. If you're "using" them in a chat interface, that's already 1+ year-old tech, and you should know that no one here is talking about that. We're talking about LLM-assisted coding using harnesses that make it possible and worth your time. Another smell is that you assert that LLMs only work for popular languages. While it's true they work best in those cases, it has also been true for about a year that they can work even on invented languages. So I take every "I work in this very niche field" with a grain of salt nowadays.

Second, the overall problem with "it doesn't work for me" is that it's a useless signal. That holds both in general and in particular. If I see a "positive post", I can immediately test it. If it works, great, I can include it in my toolbox. If it doesn't work, I can skip it. But with posts like yours, I can't do anything. You haven't provided any details, and even if you did, the result would still be so dependent on your particular problem, language, env, etc. that the signal would be very weak for anyone who doesn't share your exact problem.

I am actually curious, if you can share, what your setup is, and perhaps an example of things you couldn't do. Perhaps we can help.

The third problem I see is that you are "fighting" other demons instead of working with people who want to contribute. You bring up hypebots, AGI, unkept promises, and so on. But we, the people here, haven't promised you anything. We're not the ones hyping up AGI, ASI, and so on. If you want to learn something, it would be more productive to keep those discussions separate. If your fight is with the hypebots, fight them on those topics, not here. Or, honestly, don't waste your time. But you do you.

Having said that, here's my take: with small provisions made for extremely niche fields (so extreme that they would place you in the 0.0x% of coders, making the overall point moot anyway), I think people reporting zero success are either wrong or using the tools wrong. It's impossible for me to believe that everything I can achieve is so out of phase with whatever you are trying to achieve that you'd get literally zero success. And I'm sick and tired of hearing "oh, it works for trivial tasks". No. It works reliably and unattended mostly for trivial tasks, but it can also work in very advanced niches. There are already plenty of public examples of this: things like kernel optimisation, tensor libraries, CUDA code, and so on. These are not "amateur" topics by any stretch of the word. And no, juniors can't one-shot them either. I say this after 25+ years of doing this: there are plenty of times when I'm dumbstruck by something working on the first try. And I can't believe I'm the only one.


I use the chat interface by default because it is the only way I have felt I am gaining any productivity at all. Letting LLMs waste time probing for files and executing their atrocities on my codebase has only resulted in lost time. Not for lack of trying; I have set up Codex and Claude Code environments multiple times. I wasted entire days trying to configure the setup and get something that provides value to me, three times last year: once with an early release of CC, once with Codex's release, and once again to retry them with GPT 5.2 and Opus 4.5. Every attempt ended in a complete failure to justify the time invested.

> The third problem I see is that you are "fighting" other demons instead of working with people who want to contribute. You bring up hypebots, AGI, unkept promises, and so on. But we, the people here, haven't promised you anything. We're not the ones hyping up AGI, ASI, and so on. If you want to learn something, it would be more productive to keep those discussions separate. If your fight is with the hypebots, fight them on those topics, not here. Or, honestly, don't waste your time. But you do you.

This very thread is about hype. The post I originally replied to suggests that developers are in stages of grief about LLMs, that we are traversing denial, anger, and depression before our inevitable acceptance. It is utterly tiring to be subjected to this day in, day out, in every avenue of public discourse about the field. Of course I have grievances with the hype. Of course I don't appreciate being told I'm in denial and that everything has changed. The only thing that has changed is that LLM-generated articles are all over HN and Show HN is polluted with a very high quantity of very low-quality content.

> Second, the overall problem with "it doesn't work for me" is that it's a useless signal.

The signal is not for the true believers. People who have not succumbed to the hype may find value in knowing that they are not alone. If one person can't make use of LLMs while everyone around them is hyping them up, it may make that person feel like they are doing something wrong and being left behind. But if people push back against the hype, they will know that they are not alone, and that maybe it isn't actually worth investing entire workdays into trying to find the magical configuration of .md files that turns Claude Code from 0.5x productivity to 10x productivity.

To be clear, I'm not really in the market for advice on "holding it right". If I find myself being left behind in reality, I will keep giving the tooling another shot until I get it right. I spend most of my life coding, and have so many ambitious projects I wish to bring into the world and not enough time to do them all; I will relentlessly pursue a productivity increase if and when it becomes available. As it is, though, I have seen zero evidence that I am actually being left behind, and am not interested in trying again at the present time.


> This very thread is about hype.

Hype doesn't explain how 95% of the code my dev team pushes to production is no longer written by hand.

I have Antigravity in its own account and that has worked pretty well so far. I also use devcontainers for the CLI agents, and that has also worked out well. It's one click away in my normal dev flow (I was already using devcontainers for Python projects anyway).

> AI agents don't seem to have sped up the corporate process at all.

I think there's a parallel here between people finding great success with coding agents and people swearing they're shit: when prodded, it often turns out that some are working on good code bases while others work on shit code bases. It's probably the same with large corpos. Depending on the culture, you might get such convoluted processes and so much "assumed" internal knowledge that agents simply won't work out of the box.


While I agree that the MCP craze was a bit off-putting, I think that came mostly from people thinking they could sell stuff in that space. If you view it as a protocol and not much else, things change.

I've seen great improvements with just two MCP servers: context7 and playwright. The first is great in planning sessions and leads to better usage of new-ish libraries, and the second gives the model a feedback loop. The advantage is that they work with pretty much any coding-agent harness you use, so whatever worked with Cursor will work with Claude Code or opencode or whatever else.


My main issue with Playwright (and chrome-devtools) is that they pollute the context with a massive amount of stuff.

What I want is a Skill that leverages a normal CLI executable and gives the LLM the same browser-use capabilities.
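Something like this rough sketch is what I mean: a tiny Python CLI built on the playwright package (installed separately via `pip install playwright && playwright install chromium`) that dumps only the visible text of a page, so the agent can shell out to it instead of holding a chatty MCP session in context.

  import sys
  from playwright.sync_api import sync_playwright

  def main():
      url = sys.argv[1]
      with sync_playwright() as p:
          browser = p.chromium.launch(headless=True)
          page = browser.new_page()
          page.goto(url)
          # Print only the rendered text, not the full DOM or accessibility
          # tree, so the agent's context stays small.
          print(page.inner_text("body"))
          browser.close()

  if __name__ == "__main__":
      main()

An agent would then run something like `python fetch_text.py https://example.com` (script name hypothetical) and get a compact text dump back.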


> I have also given them examples of good answers: terse and to the point

Oh man, this reminds me of one test I had in uni, back in the days when all our tests were in class, pen & paper (what's old is new again?). We had this weird class that taught something like security programming in Unix. Or something. Anyway, all I remember is that the first two questions were about security/firewall stuff, and the third question was "what is a socket". I really liked the first two questions and over-answered for about a page each, enough to run out of both paper and time. So my answer to the third question was "a file descriptor". I don't know if they laughed at my terseness or just figured that since I over-answered the previous questions I knew what a socket was, but whoever graded my paper gave me full points.


The biggest advantage by far is the data they collect along the way. Data that can be bucketed by real devs, with signals extracted from it, can be top tier. All that data + signals + whatever else they cook up can be fed back into the training corpus and the models retrained / version++'d on the new set. Rinse and repeat.

(This is also why all the labs, including some Chinese ones, are subsidising / me-too-ing coding agents.)

