Hacker News | alexrp's comments

Binary Ninja deserves a mention in these threads: https://binary.ninja

I've used IDA, Ghidra, and Binary Ninja a lot over the years. At this point I much prefer Binary Ninja for the task of building up an understanding of large binaries with many thousands of types and functions. It also doesn't hurt that its UI/UX feel like something out of this century, and it's very easy to automate using Python scripts.


One large-ish past thread and a few tinies, for anyone curious:

Binary Ninja – an interactive decompiler, disassembler, debugger - https://news.ycombinator.com/item?id=41297124 - Aug 2024 (1 comment)

Binary Ninja – 4.0: Dorsai - https://news.ycombinator.com/item?id=39546731 - Feb 2024 (1 comment)

Binary Ninja 3.0: The Next Chapter - https://news.ycombinator.com/item?id=30109122 - Jan 2022 (1 comment)

Binary Ninja – A new kind of reversing platform - https://news.ycombinator.com/item?id=12240209 - Aug 2016 (56 comments)


Yep, it's cheaper than IDA and I like the UI better. Also I love that it's made by game hacking folks (my clique).

I believe the Binja folk originate from the CTF folk.

Wow, they made it free. The last time I used it I bought a $100 subscription for non commercial use.

Not sure why you would use the Binary Ninja free version; there are so many limitations. Like IDA Free, the platform support is very limited.

BN is nice if someone else is paying for it, but it has too many limitations, especially for the most common use case, which is security.

What are the limitations?

No shellcode decoding, no plugin support and rather limited IR.

> No shellcode decoding

Can't speak to this as I don't RE for security purposes, but:

> no plugin support and rather limited IR.

this I'm profoundly confused by. BN has multiple IRs that are easily accessible both in the UI and to scripts. And it certainly has a plugin system too.


Not in the free version

Binary Ninja definitely has plugins?

Binary Ninja seems way ahead in terms of UX, as a hobby reverser. It's my default as well.

It's basically "VS Code" UX with dark mode. Come on, is this some sort of joke? Serious question.

I'm curious what you would consider better UX?

We have actually been more inspired by Jetbrains lately than VS Code. Take that for what you will.

We do try to pick simple sane defaults while still allowing enough customization to adapt to different workflows.

Actually working on a startup wizard for first time users if they want to more closely replicate the feel of other RE tools since muscle memory is hard to break.


Last time I used them, Ghidra, and to some extent IDA, had UXes that were very difficult for new users to pick up and frequently deviated from standard expectations for modern desktop apps, because they have two decades of baggage. In contrast, Binary Ninja is very easy to explore and has far fewer surprises.

In particular, I like their approach of building a modern IR pipeline.


This is not really related

The Linux free trial version is a 400MB .zip file including a 255.2MB "binaryninja" shared binary

https://github.com/Vector35/binaryninja-api/releases/downloa...



i've heard there is a way to get the source code for free

what's your point?

It sounds like you expected 1.0 stability from a language that isn't 1.0.

> I thought it was stable enough initially but they completely broke fuzz testing feature and didn’t fix it.

From the 0.14.0 release notes:

> Zig 0.14.0 ships with an integrated fuzzer. It is alpha quality status, which means that using it requires participating in the development process.

How could we possibly have been more explicit?

Fuzzing will be a major component of Zig's testing strategy in the long term, but we clearly haven't had the time to get it into shape yet. We also never claimed to have!

> Also some things like stack traces were broken in small ways in zig. It would report wrong lines in stack traces when compiling with optimizations. Also wasn’t able to cleanly collect stack traces into strings in production build.

I mean, to be fair, most compiled languages can't give you 100% accurate source-level stack traces in release builds. But that aside, we have actually invested quite a lot of effort into std.debug in the 0.16.0 release cycle, and you should now get significantly better and more reliable stack traces on all supported platforms. If you encounter a case where you don't, file a bug.

> And recently saw they even moved the time/Instant API to some other place too. This kind of thing is just super annoying with seemingly no benefit. Could have left the same API there and re-used it from somewhere else. But no, have to make it “perfect”

I acknowledge that API churn can be annoying, but it would be weird not to aim for perfection prior to 1.0.


Makes sense, that is fair.

I was a bit too frustrated with all these changes and zig wasn’t the right choice for my particular use case then.


Just on this point:

> You mean like how Rust tried green threads pre-1.0? Rust gave up this one up because it made runtime too unwieldy for embedded devices.

The idea with making std.Io an interface is that we're not forcing you into using green threads - or OS threads for that matter. You can (and should) bring your own std.Io implementation for embedded targets if you need standard I/O.


Ok. But if your program assumes green threads and spawns, say, two million of them on a target that doesn't support them, then what?

The nice thing about async is that it tells you threads are cheap to spawn. By making everything colourless, you implicitly assume everything is a green thread.


Most people would be better off waiting for the multiple RVA23 boards that are supposed to come out this year, at least if they don't want to be stuck running custom vendor distros. "RVA23 except V" at this price point and at this point in time is a pretty bad value proposition.

It's honestly a bit hard to understand why they bothered with this one. No hate for the Milk-V folks; I have 4 Jupiters sitting next to me running in Zig's CI. But hopefully they'll have something RVA23-compliant out soon (SpacemiT K3?).


> But hopefully they'll have something RVA23-compliant out soon (SpacemiT K3?).

A handful of developers already have access to SpacemiT K3 hardware, which is indeed RVA23 compliant and already runs Ubuntu 26.04.

geekbench: https://browser.geekbench.com/v6/cpu/16145076

rvv-bench: https://camel-cdr.github.io/rvv-bench-results/spacemit_x100/... (which has instruction throughput measurements and more)


This is around the performance of a Core 2 Duo, if I understand correctly?


The single core performance is roughly in the middle between Pi4 Cortex-A72 and Pi5 Cortex-A76.

It's slightly faster than a 3GHz Core 2 Duo in scalar single-threaded performance, but it has 8 cores instead of two and more SIMD performance. There are also 8 additional SpacemiT-A100 cores with 1024-bit wide vectors, which are more like an additional accelerator.

The geekbench score is a bit lower than it should be, because at least three benchmarks are still missing SIMD acceleration on RISC-V (File Compression, Asset Compression, Ray Tracer), and the HTML5 browser test is also missing optimizations.

I'd estimate it should be able to get to the 500 range with comparable optimization to other architectures.

The Milk-V Titan mentioned in the original post is actually slightly faster in scalar performance, but has no RISC-V Vector support at all, which causes its geekbench score to be way lower.


Do you happen to know how one accesses/uses those A100 cores?


No.

The problem is that you can't migrate threads between cores with different vector lengths.

The current Ubuntu 26.04 image that is installed lists 16 cores in htop, but you can only run applications on the first 8 (e.g. taskset -c 10 fails). If you query what's running on the A100 cores, you see things like "kworker" processes.

I suspect that it should be possible to write a custom kernel module that runs on the A100s with the current kernel, but I'm not sure.

I expect it will definitely be possible to boot an OS on only the 8 A100 cores.

We'll have to see if they manage to figure out how to add support for explicitly pinning user-mode processes to those cores.

The ideal configuration would be to have everything run only on the X100s, but with an opt-in mechanism to run a program only on an A100 core.


That’s actually decent, thanks.


Something is odd here: the Core 2 Duo only has up to SSE4.1, while the RVA23 instruction set is analogous to x86-64-v3. I find it hard to believe that the SpacemiT K3 merely matched a Core 2 Duo single-core score while leveraging those new instructions.

To wit, the Geekbench 6.5.0 RISC-V preview ships 3 files, 'geekbench6', 'geekbench_riscv64', and 'geekbench_rv64gcv', which are presumably the benchmark executables for different supported instruction sets. This makes the score an unreliable narrator of performance, as someone could have run one of the other binaries and the posted score would not be genuine. And that's on top of the perennial remark that the benchmark(s) may simply not be optimized for RISC-V.


If it's anything like the K1, I wouldn't be surprised if Core 2 performance was on the table. The released specs are ~Sandy Bridge-Haswell like, but those were architectures made by (at the time) the top CPU manufacturer and were carefully balanced to maximize performance while minimizing transistors. SpacemiT is playing on easy mode (they are making a chip on a ~2-4x smaller process node and aren't pioneering bleeding-edge techniques), but balancing an out-of-order CPU is still tough, and it's totally possible to lose 50% of theoretical IPC if you don't have the memory bandwidth, cache hierarchy, scheduling, etc.


Cache issues add another layer here, if they're not the whole issue. Device tree patches for the K3 show 2 clusters of 4 cores with a shared 4MB L2 cache per cluster. The Core 2 Duo P8400 has 3MB of L2 shared between 2 cores, and Sandy Bridge through Haswell have per-core L2 and a shared L3.


I don't think you'll be able to get away from custom distros even with RVA23. It solves the problem of binary compatibility - everything compiled for RVA23 should be pretty portable at the instruction level (won't help with the usual glibc nonsense of course).

But RVA23 doesn't help with the hardware layer - it's going to be exactly the same as ARM SBCs where there's no hardware discovery mechanism and everything has to be hard-coded in the Linux device tree. You still need a custom distro for Raspberry Pi for example.

I believe there has been some progress in getting RISC-V ACPI support, and there's at least the intent of making mconfigptr do something useful - for a while there was a "unified discovery" task group, but it seems like there just wasn't enough manpower behind it and it disbanded.

https://github.com/riscvarchive/configuration-structure/blob...

https://riscv.atlassian.net/browse/RVG-50


> You still need a custom distro for Raspberry Pi for example.

Are you sure that's still the case? I just checked the Raspberry Pi Imager and I see several "stock" distro options that aren't Raspbian.

Regardless, I take your point that we're reliant on vendors actually doing the upstreaming work for device trees (and drivers). But so far the recognizable players in the RISC-V space do all(?) seem to be doing that, so for now I remain hopeful that we can avoid the Arm mess.


I'm not totally sure, but I would imagine those stock distros actually have dedicated packages for Raspberry Pi kernel images.

See this for example: https://www.phoronix.com/news/Raspberry-Pi-5-Ethernet-Linux

If you look at the patch series, it's directly adding information about the address of the ethernet device. That's the sort of thing that would be discovered automatically in the x86 world. It wouldn't need to be hard-coded into the kernel for each individual board that is supported.


I feel this is becoming a bit of a tech urban legend, such as "ZFS requires ECC".

As far as I understand, the RVA23 requirement is an Ubuntu thing only, and only for current non-LTS and future releases. The current LTS doesn't have such requirements, and neither do other distributions such as Fedora and Debian that support riscv64.

So no, you are not stuck running custom vendor distros because of this, but rather because of the other weird device drivers and boot systems that have no mainline support.


I'm fairly sure I recall Fedora folks signaling that they intend to move to RVA23 as soon as hardware becomes generally available.

It is of course possible that Debian sticks with RV64GC for the long term, but I seriously doubt it. It's just too much performance to leave on the table for a relatively new port, especially when RVA23 will (very) soon be the expected baseline for general-purpose RISC-V systems.


As someone from the Fedora/RISC-V project, it'll depend on what our users want. We cannot support both RV64GC and RVA23 (because we don't have the build or software infra to do it) so we have to be careful when we move. Doing something like building with RV64GC generally but having targeted optimizations - perhaps two kernel variants and some libraries - might be possible, but also isn't easy.

Things are different for CentOS / RHEL where we'll be able to move to RVA23 (and beyond) much more aggressively.


First things first: thank you for your work.

That being said: does it make sense to keep a new but low-performance platform alive? As the platform is new and likely doesn't have many users, wouldn't it make sense to nudge (as in "gently push") users towards a higher-performance platform?

Chances are the low-performance platform will die anyway, and Fedora will not be exploiting the full offering of the high-performance platform.


It's about what users think in our forums: https://discussion.fedoraproject.org/tag/risc-v-sig


I'm not completely sure, but I suspect Fedora will stick to the current baseline for quite some time.

But the baseline is quite minimal. It's biased towards efficient emulation of the instructions in portable C code. I'm not sure why anyone would target an enterprise distribution at that.

On the other hand, even RVA23 is quite poor at signed overflow checking. Like MIPS before it, RISC-V is a bet that we're going to write software in C-like languages for a long time.


> On the other hand, even RVA23 is quite poor at signed overflow checking

When I tried to measure the impact of -ftrapv in RVA23 and armv9, it was roughly the same: https://news.ycombinator.com/item?id=46228597#46250569

reminder:

    unsigned 64-bit:
    add: RV: add+bltu       Arm: adds+bcc
    sub: RV: sub+bltu       Arm: subs+bcs
    mul: RV: mulhu+mul+beqz Arm: umulh+mul+cbz
    
    unsigned 32-bit:
    add: RV: addw+bgeu     Arm: adds+bcc
    sub: RV: subw+bgeu     Arm: subs+bcs
    mul: RV: mul+slli+beqz Arm: umul+cmp lsr 32

    signed 64-bit:
    add: RV: add+slt+slti+beq  Arm: adds+bcc
    sub: RV: sub+slt+slti+beq  Arm: subs+bcs
    mul: RV: mulh+mul+srai+beq Arm: smulh+mul+cmp asr 63
    
    signed 32-bit:
    add: RV: addw+add+beq   Arm: adds+bvc
    sub: RV: subw+sub+beq   Arm: subs+bvs
    mul: RV: mul+sext.w+beq Arm: smul+asr+cmp asr 31


> On the other hand, even RVA23 is quite poor at signed overflow checking.

On the other hand it avoids integer flags which is nice. I doubt it makes a measurable performance impact either way on modern OoO CPUs. There's going to be no data dependence on the extra instructions needed to calculate overflow except for the branch, which will be predicted not-taken, so the other instructions after it will basically always run speculatively in parallel with the overflow-checking instructions.


It's nice for a C simulator to avoid condition codes. It's not so nice if you want consistent overflow checks (e.g., for automatically overflowing from fixnums to bignums).

Even with XNOR (which isn't even part of RVA23, if I recall correctly), the sequence for doing an overflow check is quite messy. On AArch64 and x86-64, it's just the operation followed by a conditional jump: https://godbolt.org/z/968Eb1dh1


Non-flag-based overflow checks are still pretty cheap. The overflow check is only 1 extra instruction for unsigned (both add and multiply), and 3/4 extra for signed overflow (see https://godbolt.org/z/nq1nb5Whr for details). It's also worth noting that in many cases, the overflow checks will be removable or simplifiable by the compiler entirely (e.g. if you're adding 1 or know the sign of one of the operands, etc.). As such, the extra couple of instructions are likely worthwhile if it makes designing a wider core easier. Signed overflow instructions would be reasonable to add, but it's not like modern high-performance cores are bottlenecked by scalar instructions that don't touch memory anyway.


Our CI workflow literally just invokes a plain old shell script (which is runnable outside CI). We really don't need an overcomplicated professional CI/CD solution.

One of the nice things about switching to Forgejo Actions is that the runner is lightweight, fast, and reliable - none of which I can say for the GitHub Actions runner. But even then, it's still more bloated than we'd ideally like; we don't need all the complexity of the YAML workflow syntax and Node.js-based actions. It'd also be cool for the CI system to integrate with https://codeberg.org/mlugg/robust-jobserver which the Zig compiler and build system will soon start speaking.

So if anything, we're likely to just roll our own runner in the future and make it talk to the Forgejo Actions endpoints.


> The reason they move to a lesser known Git provider sounds more like a marketing stunt.

We had technical problems that GitHub had no interest in solving, and lots of small frustrations with the platform built up over years.

Jumping from one enshittified profit-driven platform to another profit-driven platform would just mean we'd set ourselves up for another enshittification -> migration cycle later down the line.

No stunt here.


Well, that explains a lot, because I thought you guys moved due to their direction; it sounded more like a political act.

Btw why not GitLab?


Worth noting that LLVM has AVR and MSP430 backends, so there's no particular resistance to 8-bit/16-bit targets.


Oh, thanks for the correction. I couldn't find a comprehensive list of backends (which is weird) and the lists I did find only included 16+ bit targets.


Hug of death followed by a DDoS. At the time of me writing this, it loads instantly again.


As I pointed out in a different comment, even IBM have to maintain a GitHub Actions runner fork with s390x support because upstream just cannot even be bothered to accept the relevant patches: https://github.com/uweigand/runner

If IBM cannot get Microsoft to work with them on something so small but impactful, there's no chance we can.

> Personally - I think GitHub is a cultural artifact now. Of the entire planet. Hackers and curious minds from Japan to Alaska and everything in-between flock to GitHub.

And it's in the hands of a for-profit company pushing LLM nonsense. That should be alarming! Let's instead encourage people to use platforms managed by non-profits.


> obscure OS not being supported

Believe it or not, there are platforms outside of the big 3.

The GitHub Actions runner does not work on FreeBSD, NetBSD, OpenBSD, and illumos, all of which are operating systems we either have existing support for, or intend to start supporting properly soon. (We already have FreeBSD CI; machines for the other 3 are arriving at my place tomorrow as it happens.)

And that's ignoring CPU architectures; the upstream GitHub Actions runner only supports x86 and aarch64. We had to maintain a fork that adds support for all the other architectures we care about such as riscv, loongarch, s390x, etc. We will also likely be adding mips64 and powerpc64 to the mix in the future.

Even IBM have to maintain an s390x fork because Microsoft can't even be bothered to accept PRs that add more platforms: https://github.com/uweigand/runner


> We already have FreeBSD CI; machines for the other 3 are arriving at my place tomorrow as it happens.

That's great. I hope it works out, and you have CI for NetBSD, OpenBSD, and illumos, too.

Go's support for NetBSD has been a big boon to the more casual NetBSD user who isn't going to maintain a port. It means a random Go open-source project you use probably works on NetBSD already, or if it doesn't, it can be fixed upstream. Maybe Zig could play a similar role.

It's a shame GitHub doesn't have native CI even for FreeBSD on x86-64. I can see the economic case against it, of course. That said, the third-party Cross-Platform GitHub Action (https://github.com/cross-platform-actions/action) has made Free/Net/OpenBSD CI practical for me. I have used it in many projects. The developer is currently working on OmniOS support in https://github.com/cross-platform-actions/omnios-builder.


> Go's support for NetBSD has been a big boon to the more casual NetBSD user who isn't going to maintain a port. It means a random Go open-source project you use probably works on NetBSD already, or if it doesn't, it can be fixed upstream. Maybe Zig could play a similar role.

In fact, we do already have cross-compilation support for NetBSD (and FreeBSD). But we currently only "test" NetBSD by building the language behavior tests and standard library tests for it on Linux, i.e. we don't actually run them, nor do we build the compiler itself for NetBSD. Native CI machines will allow us to fill that gap.

As it happens, Go's cross-compilation support does indeed make our lives easier for provisioning CI machines since we can build the Forgejo Runner for all of them from one machine: https://codeberg.org/ziglang/runner/releases/tag/v12.0.0

