Everyone should read this comment; it does a really eloquent job explaining the situation.
The fundamental thing to understand is this: The things you hear about people making $500k for on the gray market and the things you see people making $20k for in a bounty program are completely different deliverables, even if the root cause bug turns out to be the same.
Quoted gray market prices are generally for working exploit chains, which require increasingly complex and valuable mitigation bypasses which work in tandem with the initial access exploit; for example, for this exploit to be particularly useful, it needs a sandbox escape.
Developing a vulnerability into a full chain carries a huge amount of risk - not weird crimey bitcoin-in-a-back-alley risk like people in this thread seem to want to imagine, but simple time-value risk. While one party is spending hundreds of hours and burning several additional exploits in the course of making a reliable and difficult-to-detect chain out of this vulnerability, fifty people are changing their fuzzer settings and sending hundreds of bugs in for bounty payout. If one of them hits the same bug and wins their $20k, the party gambling on the $200k full chain is back to square one.
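To make the time-value point concrete, here's a back-of-the-envelope sketch using the payout figures above; the hour counts and the collision/patch probability are pure assumptions for illustration:

```python
# Back-of-the-envelope expected value: quick bounty vs. full-chain gamble.
# Payout figures come from the comment above; the hour counts and the
# survival probability are assumptions purely for illustration.

bounty_payout = 20_000        # near-certain once the crash reproduces
bounty_hours = 40             # assumed: tweak fuzzer, triage, write it up

chain_payout = 200_000        # gray-market price for a reliable chain
chain_hours = 400             # assumed: hundreds of hours of development
p_chain_survives = 0.5        # assumed: odds nobody collides or patches first

bounty_ev_per_hour = bounty_payout / bounty_hours
chain_ev_per_hour = (chain_payout * p_chain_survives) / chain_hours

print(f"bounty: ~${bounty_ev_per_hour:,.0f}/hour")   # ~$500/hour
print(f"chain:  ~${chain_ev_per_hour:,.0f}/hour")    # ~$250/hour
```

Under those (made-up) assumptions, the 10x headline price doesn't even beat the bounty on an hourly basis, which is exactly the gamble the comment is describing.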
Vulnerability research for bug bounty and full-chain exploit development are effectively different fields, with dramatically different research styles and economics. The fact that they intersect sometimes doesn't mean that it makes sense to compare pricing.
Why is it that the USA doesn't have its own bug bounty program for non-DoD systems? Like, sure, they have a bounty for vulns in govt systems. But why not accept vulns for any system, and offer to pay more than anyone else? It would give them a competitive advantage (offensive & defensive) over every other nation. End one experimental weapons program (or whatever garbage the DoD spends its obscene budget on) and suddenly we're not cyber-sucky anymore.
I think you are confusing bug bounty programs with espionage and cyber warfare. The USA definitely accepts vulnerabilities for any system (or at least target systems), paying good money for them if they come as an attack chain, giving them that competitive edge you mention. They have at least one military organization dedicated to this exact thing (USCYBERCOM) and realistically other orgs as well, including the intelligence community.
There are no bug bounties on "any" system because bug bounties are part of programs to fix bugs, not exploit them. They therefore have bug bounties for their own systems, as those are the ones they would be interested in improving. What you described, which they definitely do, is cyber espionage, and those bugs are submitted through different channels than a bug bounty.
But that's the thing, I think they specifically need a non-IC program. If I'm a white-hat, grey-hat, or a somewhat cagey black-hat, I'm not gonna reach out to a shadowy organization with a penchant for extrajudicial surveillance, torture & killing to make $50k on a bug. Sure, you can try your hand at selling them an exploit that won't get revealed. But if only you and The Company know about the bug, and it could mean the upside in a potential war (or just a feather in an agency head's cap), why would The Company keep you alive and able to talk about it? OTOH, if the program you're reporting to doesn't have a track record of illegal activity, personally I'd feel a lot safer reporting there. And ideally their mission would be to patch the bug and not hold onto it. But we get to patch first, so it's still our advantage.
Because collecting and gatekeeping vulns so you can attack other countries is bad manners.
If you look up some of the Snowden testimonies, it's implied the USA at least had access to some 0-days in the past, but nobody admitted to it, because it's just bad national politics.
Even if the USA is doing dog-shit politics now, openly admitting to collecting cyber-weapons (instead of doing it quietly) is just an open invitation for condemnation.
From being in the trenches a couple of decades ago, they do. They just don't disclose after they pay the bounty. They keep them to themselves. I knew one guy (~2010?) making good money just selling exploits (to a 3-letter agency) that disabled the tally lamps on webcams so the cams could be enabled without alerting the subject.
Even though I agree with the conclusion with respect to pricing, I don't think this comment is generally accurate.
Most* valuable exploits can be sold on the gray market - not via some bootleg forum with cryptocurrency scammers or in a shadowy back alley for a briefcase full of cash, but for a simple, taxed, legal consulting fee to a forensics or spyware vendor or a government agency in a vendor shaped trenchcoat, just like any other software consulting income.
The risk isn't arrest or scam, it's investment and time-value risk. Getting a bug bounty only requires (generally) that a bug can pass for real; get a crash dump with your magic value in a good looking place, submit, and you're done.
Selling an exploit chain on the gray market generally requires that the exploit chain be reliable, useful, and difficult to detect. This is orders of magnitude more difficult and is extremely high-risk work not because of some "shady" reason, but because there's a nonzero chance that the bug doesn't actually become useful or the vendor patches it before payout.
The things you see people make $500k for on the gray market and the things you see people make $20k for in a bounty program are completely different deliverables even if the root cause / CVE turns out to be the same.
*: For some definition of most, obviously there is an extant "true" crappy cryptocurrency forum black market for exploits but it's not very lucrative or high-skill compared to the "gray market;" these places are a dumping ground for exploits which are useful only for crime and/or for people who have difficulty doing even mildly legitimate business (widely sanctioned, off the grid due to personal history, etc etc.)
I see that someone linked an old tptacek comment about this topic which per the usual explains things more eloquently, so I'll link it again here too: https://news.ycombinator.com/item?id=43025038
The lack of CUDA support on AMD is absolutely not that AMD "couldn't" (although I certainly won't deny that their software has generally been lacking), it's clearly a strategic decision.
Supporting CUDA on AMD would only build a bigger moat for NVidia; there's no reason to cede the entire GPU programming environment to a competitor and indeed, this was a good gamble; as time goes on CUDA has become less and less essential or relevant.
Also, if you want a practical path towards drop-in replacing CUDA, you want ZLUDA; this project is interesting and kind of cool, but the limitation to a C subset and the lack of replacement libraries (BLAS, DNN, etc.) make it not particularly useful in comparison.
They've already ceded the entire GPU programming environment to their competitor. CUDA is as relevant as it always has been.
The primary competitors are Google's TPUs, which are programmed using JAX, and Cerebras, which has an unrivaled hardware advantage.
If you insist on a hobbyist-accessible underdog, you'd go with Tenstorrent, not AMD. AMD is only interesting if you've already been buying Blackwells by the pallet and you're okay with building your own inference engine in-house for a handful of models.
Even disregarding CUDA, NVidia has had like 80% of the gaming market for years without any signs of this budging any time soon.
When it comes to GPUs, AMD just has the vibe of a company that basically shrugged and gave up. It's a shame because some competition would be amazing in this environment.
Nvidia has a sprawling APU family in the Tegra series of ARM APUs, which spans machines from the original Jetson boards and the Nintendo Switch all the way to the GB10 that powers the DGX Spark and the robotics-targeted Thor.
The CPUs in their SoCs were not up to snuff for a non-portable game console until very recently. They used (and largely still do, I believe) off-the-shelf ARM Cortex designs. The SoC fabric is their own, but the cores are standard.
In performance, even the aging Zen 2 would demolish the best Tegra you could get at the time.
You should note that the Switch, the only major handheld console for the last 10 years, is the only one using a Tegra.
And from everything I've heard, Nvidia is a garbage hardware partner who you absolutely don't want to base your entire business on, because they will screw you. The consoles all use custom AMD SoCs; if you're going to that deep a level of partnering, you'd want a partner who isn't out to stab you.
There has been a rumor that some OEMs will be releasing gaming-oriented laptops with an Nvidia N1X Arm CPU + some form of 5070-5080-ballpark GPU; obviously not x86 Windows, so it would be pushing the latest compatibility layer.
PlayStation and Xbox are two extremely low-margin, high volume customers. Winning their bid means shipping the most units of the cheapest hardware, which AMD is very good at.
Agreed on ZLUDA being the practical choice. This project is more impressive as a "build a GPU compiler from scratch" exercise than as something you'd actually use for ML workloads. The custom instruction encoding without LLVM is genuinely cool though, even if the C subset limitation makes it a non-starter for most real CUDA codebases.
ZLUDA doesn't have full coverage though, and that means only a subset of CUDA codebases can be ported successfully - they've focused on 80/20 coverage for core math.
Completely different layer; tinygrad is a library for performing specific math ops (tensor, nn), while this is a compiler for general CUDA C code.
If your needs can be expressed as tensor operations or neural network stuff that tinygrad supports, you might as well use that (or one of the ten billion other higher-order tensor libs).
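For context, a minimal sketch of what "expressible as tensor ops" looks like in tinygrad; treat the exact API as illustrative, since details may differ between versions:

```python
# Illustrative tinygrad usage: if your workload is just tensor math like
# this, a tensor library already covers it and you never touch CUDA C.
from tinygrad import Tensor

x = Tensor.rand(64, 128)
w = Tensor.rand(128, 10)
logits = (x @ w).relu()       # matmul + activation, scheduled by tinygrad
print(logits.numpy().shape)   # (64, 10)
```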
Claude is doing the decompilation here, right? Has this been compared against using a traditional decompiler with Claude in the loop to improve decompilation and ensure matched results? I would think that Claude's training data would include a lot more pseudo-C <-> C knowledge than pairs of GCC 2.7 MIPS assembly and C, and even if the traditional decompiler was kind of bad at N64, it would be more efficient to fix bad decompiler C than assembly.
It's wild to me that they wouldn't try this first. Feeding the asm directly into the model seems like intentionally ignoring a huge amount of work that has gone into traditional decompilation. What LLMs excel at (names, context, searching in high-dimensional space, making shit up) is very different from, e.g., coming up with an actual AST with infix expressions that represents asm code.
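A rough sketch of what "traditional decompiler with Claude in the loop" could look like; the decompile/compile/diff helpers are hypothetical stand-ins for Ghidra headless and a decomp project's toolchain, the model id is a placeholder, and only the anthropic SDK call itself is real API:

```python
# Hypothetical "decompiler in the loop" pipeline: start from decompiler
# pseudo-C instead of raw MIPS assembly, have the model clean it up, then
# verify by recompiling and diffing against the original object code.
# decompile_function(), disassemble_function(), compile_with_gcc27(), and
# asm_diff() are placeholder helpers, not real tools.
import anthropic

client = anthropic.Anthropic()

def refine(pseudo_c: str, target_asm: str, feedback: str = "") -> str:
    prompt = (
        "Rewrite this decompiler pseudo-C as clean C that GCC 2.7 for MIPS "
        "would compile to the target assembly.\n\n"
        f"Pseudo-C:\n{pseudo_c}\n\nTarget assembly:\n{target_asm}\n\n"
        f"Previous mismatch, if any:\n{feedback}"
    )
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # assumed model id; use whatever is current
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def match_function(func_name: str, max_rounds: int = 5):
    pseudo_c = decompile_function(func_name)       # e.g. Ghidra headless export
    target_asm = disassemble_function(func_name)   # original ROM assembly
    feedback = ""
    for _ in range(max_rounds):
        candidate = refine(pseudo_c, target_asm, feedback)
        built_asm = compile_with_gcc27(candidate)   # period-correct compiler
        feedback = asm_diff(target_asm, built_asm)  # empty string == matched
        if not feedback:
            return candidate
    return None
```

The point is just that the model starts from something much closer to C and the loop terminates on an objective "matched" signal rather than on the model's own judgment.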
I've been doing some decompilation with Ghidra. Unfortunately, it's of a C++ game, which Ghidra isn't really great at. And thus Claude gets a bit confused about it all too. But all in all: it does work, and I've been able to reconstruct a ton of things already.
One of the other PhD students in my department has an NDSS 2026 paper about combining the strengths of both LLMs and traditional decompilers! https://lukedramko.github.io/files/idioms.pdf
Not Claude, but there are open-weight LLMs trained specifically on Ghidra decomp and tested on their ability to help reverse engineers make sense of it:
Agree. IDA is surely the “primary” tool for anything that runs on an OS on a common arch, but once you get into embedded, Ghidra is heavily used for serious work, and once you get to heavily automation-based scenarios or obscure microarchitectures, it’s the best solution - and certainly a “serious” product used by “real” REs.
For UI-based manual reversing of things that run on an OS, IDA is quite superior; it has really good pattern matching and is optimized for this use case, so combined with the more ergonomic UI, it’s way, way faster than Ghidra and is well worth the money (provided you are making money off of RE). The IDA debugger is also very fast and easy to use compared to Ghidra’s, provided your target works (again, anything that runs on an OS is probably golden here).
For embedded, IDA is still very ergonomic, but since it isn’t abstracted in the way Ghidra is, the decompiler only works on select platforms.
Ghidra’s architecture lends itself to really powerful automation tricks since you can basically step through the program from your plugin without having an actual debug target, no matter the architecture. With the rise of LLMs, this is a big edge for Ghidra as it’s more flexible and easier to hook into to build tools.
The overall Ghidra plugin programming story has been catching up; it’s always been more modular than IDA, but in the past it was too Java-oriented to be fun for most people; the Python bindings are a lot better now. IDA scripting has been quite good for a long time, so there’s a good corpus of plugins out there too.
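To illustrate the "step through the program without a debug target" point from above, here's roughly what that looks like using Ghidra's EmulatorHelper from a Jython/PyGhidra script; this is a sketch from memory, so check the current API docs, and the addresses are placeholders:

```python
# Rough sketch of Ghidra's built-in p-code emulation from a script: no live
# debug target needed, and it works on whatever architecture the program uses.
# Run as a Ghidra script; currentProgram, toAddr, and monitor are provided
# by the scripting environment. Addresses below are placeholders.
from ghidra.app.emulator import EmulatorHelper
from java.math import BigInteger

emu = EmulatorHelper(currentProgram)
pc_reg = currentProgram.getLanguage().getProgramCounter()

start = toAddr(0x00401000)   # placeholder: function entry to emulate
stop = toAddr(0x00401040)    # placeholder: address to stop at

emu.writeRegister(pc_reg, BigInteger.valueOf(start.getOffset()))
while emu.getExecutionAddress() != stop:
    if not emu.step(monitor):            # single-step one instruction
        print("emulation fault: " + str(emu.getLastError()))
        break
    print("pc = " + str(emu.getExecutionAddress()))

emu.dispose()
```

From a plugin's point of view this is what makes the automation story attractive: the same loop works whether the binary is x86, MIPS, or some obscure microcontroller, because it runs on Ghidra's p-code rather than on real hardware.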