
Yeah, afaik architecture dynamic binary translation dates back to at least 1998 (VMware).

If you leave out the JIT part, binary translation dates back to at least 1966 (Honeywell).

Still one of the GOATs, agree.


Claims of ‘firsts’ undermine the authority of this document, though not the achievements of the subject.

For instance Marco Ternelli’s dynamic binary translator ZM/HT dates back to 1993, when it was published by Ergon Development. It translates Z80 to 68000 machine code on the fly and was a successful commercial product. I’d be interested to hear of earlier JIT binary to binary implementations, especially others which coped with self-modifying code, without which ZM/HT wouldn’t have been very useful.

Self-unpacking executables are at least a decade older, and Fabrice quite likely had Microsoft’s 1985 EXEPACK, written by Reuben Borman, on his computer when he came up with LZEXE. That was bundled with MASM and Microsoft C 3.0, their first in-house version. Both were preceded by Realia’s Spacemaker product, which Wikipedia says was written by Robert B. K. Dewar in 1982.


Thanks for the reference to https://en.wikipedia.org/wiki/Honeywell_200 -- apparently its claim to fame was that it could run IBM 1401 programs faster than a 1401, for less money.

> Compatibility with the IBM/1400 Series has, of course, been a key factor in the success of the Series 200. The principal software components in Honeywell's "Liberator" approach are the Easytran translators, which convert Autocoder source programs written for the IBM machines into Easycoder source programs which can be assembled and run on Series 200/2000 systems, usually with little or no need for manual alterations. The Easytran routines have effectively overcome the minor differences between the instruction sets and assembly languages of the two systems in literally hundreds of installations.

from https://bitsavers.org/pdf/honeywell/datapro/70C-480-01_7404_...

https://cdnibm1401.azureedge.net/1401-Competition.html

It appears that Honeywell Liberator was a program to convert 1401 assembly to Easycoder, the Honeywell 200 assembly format.


Look, my setup works for me. Just add an option to reenable spacebar heating.


I have such fond memories of the Nokia N810.

I did my master’s thesis on that device. I had a custom hypervisor running a guest kernel, virtualized networking, and a buildroot userspace. I could SSH into the host N810, then SSH into the guest. I even virtualized the framebuffer at some point and got the “dancing baby” animation playing from the guest. It only ran at a couple frames per second, but it was _amazing_.


The only weird thing about it was that you couldn't charge a fully empty N810 with the micro(?) USB charger. It'd charge just enough to boot and then crash again, because it couldn't wake up far enough to negotiate a higher current with the charger.

Had to use a barrel plug to charge it.

Spent a very nervous and sweaty day figuring that out when I bought one used with no warranty or returns and it didn't boot properly =)


It sure is a weird thing, but yes, the first mobile devices that shipped with USB didn't really know how to charge off it.


Which, to be fair to them, USB was never supposed to be a power delivery standard (at least not beyond the 5 volts needed to power a mouse).


And now pretty much all of the portable devices in my house can be charged with 5V/2A USB :D


Somebody sell me on these newfangled tiling WMs. I have been using basically the same xmonad configuration for 15+ years, pretty much updating it only on breaking or deprecated changes. What do all these new Wayland compositors have to offer except "tiling, but for Wayland"?

Does Wayland actually work now? I've tried it every few years for over a decade now and every time I ran into showstopper bugs (usually on nvidia cards).


Nvidia + Arch + Gnome3 + Wayland user here. I've tried Wayland on and off for the last couple of years, and made the switch I think sometime late last year, once I stopped seeing very obvious bugs/issues. Just about everything works fine nowadays in my experience.

Mostly made the switch because Wayland seems to run a lot smoother and more efficiently, especially when it came to Firefox for some reason.


Be careful. There are still showstoppers in Wayland implementations if you do anything that isn't common for a Linux user. Example: I am still unable to change the orientation of my drawing tablet.

There are many like this. It mostly works, but it isn't as flawless as just using X11 (unless we are talking about displays and stuff).

Nvidia works since driver 570.

(Edit: grammar and Nvidia note)


Oh weird, I never had a problem like that with my drawing tablet, but then again I dumped Nvidia in 2011 when I switched to Arch Linux full-time and had to fix my install twice because the drivers weren't compatible with the latest kernel.

What I still miss is stuff like browser docks in OBS and other things that just work on X but whose Wayland support has been dragging on for multiple years now (a CEF thing, though).


On Gnome + NVIDIA (RTX 2080) + Wayland + 3 Monitors [1DP 4K, 2HDMI (2K, FHD)]

Every time I try it, I am really impressed with the smoothness! But every time, two issues come up, most likely due to NVIDIA, which are complete showstoppers.

1. After an inactivity period, the monitors turn off. When I resume, one monitor won't come back up. I have to deactivate it in the control panel, then cancel, to get it back. Doing that many times a day is extremely annoying. This does not happen with two monitors.

2. Monitors won't turn off ... yeah, after the inactivity period the monitors blank, briefly turn off, and then turn back on. And then they never turn off. This mostly happens after playing games.

I think both of the issues are due to NVIDIA.

Otherwise, Wayland has become really solid.

Using i3 now, it's not much, it's boring, and that's a good thing.


I think "tried it on _what_?" is the question -- which distribution, etc.? I've been using Wayland on Fedora for years and don't have any complaints. My primary laptop/desktop has an Intel graphics chipset, but I've tested it on laptops w/NVIDIA and not had problems.


It's been a few years since I last looked at it, but I've tried daily-driving it probably 4 or 5 times over the last 15 years. Usually on Arch, but also some Debian/Ubuntu-based distros. It's fuzzy now, but I've tried probably every NVIDIA GPU generation since the GTX 500 series.

I can't remember all the bugs, but I've definitely at least encountered all flavors of flickering bugs, stale updates, GPU crashes, failed copy and paste, failed screenshares, failed videoconferences...

From comments on this thread, it sounds like things have drastically improved and it's probably time to take another look.


The version of NVIDIA drivers that Debian ships with lacks explicit sync even now. Pretty much every other distro should work though.


In the same boat with you. Not quite the same configuration (some version-change issues, lost it once in an 'rm' accident that followed a symlink to / [I learned that day...] and had to start from scratch, rewrote it for fun once), but it was my sole desktop from '09 to '23, when I switched to Niri. My reasoning here: https://news.ycombinator.com/item?id=45462034

This was on my Bonobo WS (PopOS) w/ 2x NVidia GTX 1080s, multiple screens (2 1080p, 1 4k at 2x scaling), etc. No issues other than app support.

Highly recommend trying it. Very low barrier to entry.


KDE, Gnome and others obviously do provide stacking windows, but you do get the impression that writing a stacking window manager/compositor is just extremely hard to do with Wayland. Someone is maintaining a list of compositors[1] and there do seem to be a number of stacking ones, they just don't really get much attention.

1) https://www.gilesorr.com/wm/table.html


The audience for stacking WMs is mainly served by the desktop environments. Both Gnome and Plasma are bigger than everything else combined.


I just set up an Asus ROG G14 with an Nvidia 3060. I was skeptical of Wayland but basically got it working straight away, with only drm.modeset to set (thanks ChatGPT?).

So two external monitors are working, except that if they are daisy-chained I get logged out when (dis)connecting them. So I use one HDMI and one DP over USB-C and it works.

So, not 100% but works better than X for me. Still too recent to have seen all the edge cases though.


This is a scrolling WM (not tiling). I've been using it as my daily driver for over a year now, and it's awesome. I never liked tiling WMs because I do a lot of web work, and I often want a large code editor and a large browser window and a few terminals open. I don't like having stuff scrunched into a little rectangle, but I do like having all of that related stuff grouped in a single workspace. This works perfectly with Niri. I can keep my editor in the center, a peek of my browser to the right and a peek of my terminal to the left, and easily flip between them, resize, stack, etc.

I know it doesn't sound all that interesting, but once I used it for a while, I just couldn't go back.


See my comment above (moved from i3wm), but my spec is:

RTX 3090, Pop OS 24.04 (beta), 4K 43" Monitor

Nvidia cards worked out of the box with no problems


I strongly believe that we will see an incident akin to Therac-25 in the near future. With as many people running YOLO mode on their agents as there are, Claude or Gemini is going to be hooked up to some real hardware that will end up killing someone.

Personally, I've found even the latest batch of agents fairly poor at embedded systems, and I shudder at the thought of giving them the keys to the kingdom to say... a radiation machine.


The Horizon (UK Post Office accounting software) scandal drove multiple subpostmasters to suicide, and bankrupted and destroyed the lives of dozens or hundreds more.

The core takeaway developers should have from Therac-25 is not that this happens just on "really important" software, but that all software is important, and all software can kill, and you need to always care.


From what I've read about that incident, I don't know what the devs could have done. The company sure was a problem, but so were the laws basically saying a computer can't be wrong. No dev can solve that problem.


> Engineers are legally obligated to report unsafe conduct, activities or behaviours of others that could pose a risk to the public or the environment. [1]

If software "engineers" want to be taken seriously, then they should also have the obligation to report unsafe/broken software and refuse to ship unsafe/broken software. The developers are just as much to blame as the post office:

> Fujitsu was aware that Horizon contained software bugs as early as 1999 [2]

[1] https://engineerscanada.ca/news-and-events/news/the-duty-to-...

[2] https://en.wikipedia.org/wiki/British_Post_Office_scandal


I don't think it's fair to blame individual developers for a systemic failure. It's not their fault there is no governing body to award or remove the title of "software engineer" and promote the concept of a software engineer refusing to do something without harming their career. Other engineering disciplines have laws, lobbied for by their governing body, that protect the ability of individual engineers to prevent higher-ups from making grave mistakes.


> It's not their fault there is no governing body to award or remove the title of "software engineer" and promote the concept of a software engineer refusing to do something without harming their career.

Those governing bodies didn't form by magic. If you look at how hostile people on this site are to the idea of unionization or any kind of collective organisation, I'd say a large part of the problem with software is individual developers' attitudes.


I have worked in this industry for 20 years and never met a piece of software I would deem "safe". It's all duct tape and spit. All of it.

I have had software professionally audited by third parties more than a few times, and they basically only ever catch surface-level bugs. Recently, the same week the audit finished, we independently found a pretty obvious SQL injection flaw.

I think the danger is not in producing unsafe software. The real danger is in thinking it can ever be safe. It cannot be, and anyone who tells you otherwise is a snake oil salesman.

If your life depends on software, you are one bit flip from death.


Then you haven't read deep enough into the Horizon UK case. The lead devs have to take a major share of the blame for what happened, as they lied to the investigators and could have helped prevent some of the suicides early on if they had had the courage. These devs are the worst kind, namely Gareth Jenkins and Anne Chambers.


As you point out, this was a mess-up on a lot of levels. It's an interesting effect though, not to be dismissed: how your software works and how it's perceived and trusted can impact people psychologically.


It was a distributed system lashed together by 'consultants' (read: recent graduates with little real world software engineering experience) in an era where best practices around distributed systems were non-existent. They weren't even thinking about what kind of data inconsistencies they might end up with.


The code being absolute dog shit was true regardless of that law's existence. There are plenty of things the developers could have done.

That law is irrelevant to this situation, except in that the lawyers for Fujitsu / the Post Office used it to imply their code was infallible.


Given whole truth testimony?


But there is still a difference here. Provenance and proper traceability would have allowed the subpostmasters to show their innocence and prove the system fallible.
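
One common way to get that kind of tamper-evident traceability (a generic sketch, not what Horizon actually did or should have done) is to hash-chain the transaction log, so quietly editing an old record breaks every hash after it:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    // Entry is one audit record; PrevHash chains it to the record before it.
    type Entry struct {
        Data     string
        PrevHash string
        Hash     string
    }

    // appendEntry hashes the entry's data together with the previous hash,
    // so silently editing an old entry invalidates every later hash.
    func appendEntry(log []Entry, data string) []Entry {
        prev := ""
        if len(log) > 0 {
            prev = log[len(log)-1].Hash
        }
        sum := sha256.Sum256([]byte(prev + data))
        return append(log, Entry{Data: data, PrevHash: prev, Hash: hex.EncodeToString(sum[:])})
    }

    // verify recomputes the chain and reports the first entry that doesn't match.
    func verify(log []Entry) (int, bool) {
        prev := ""
        for i, e := range log {
            sum := sha256.Sum256([]byte(prev + e.Data))
            if e.PrevHash != prev || e.Hash != hex.EncodeToString(sum[:]) {
                return i, false
            }
            prev = e.Hash
        }
        return -1, true
    }

    func main() {
        var log []Entry
        log = appendEntry(log, "branch 42: +100.00")
        log = appendEntry(log, "branch 42: -30.00")

        log[0].Data = "branch 42: +900.00" // someone "corrects" history
        if i, ok := verify(log); !ok {
            fmt.Printf("tampering detected at entry %d\n", i)
        }
    }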

In the Therac-25 case, the killing was quite immediate and it would have happened even if the correct radiation dose was recorded.


I’m not sure it would. Remember that the prosecutors in this case were outright lying to the courts about the system! When you hit that point, it’s really hard to even get a clean audit trail out in the open any more!


I don't understand the distinction here.

> Provenance and proper traceability would have allowed

But there wasn't those things, so they couldn't, so they were driven to suicide.

Bad software killed people. It being slow or fast doesn't seem to matter.


Slow killing software can be made more secure by adding the possibility for human review.

Fast killing software is too fast for that.


I'm really trying to understand your point, but I am failing.

It sounds like you're saying that you shouldn't care as much about the quality of "slow killing software" because in theory it can be made better in the future?

But... it wasn't though? Horizon is a real software system that real developers like you and me built that really killed people. The absolutely terrible quality of it was known about. It was downplayed and covered up, including by the developers who were involved, not just the suits.

I don't understand how a possible solution absolves the reality of what was built.


I teach the Horizon Post Office scandal in my database courses. And my takeaway is that software fails. And if people's lives are involved, an audit trail is paramount.

In slowly killing software the audit trail might be faster than the killing. In fast killing software, the audit trail isn't.


Yes, the audit trail that should exist is part of the package. Or more generically, Horizon should have had enough instrumentation, combined with adequate robustness, where they could detect the issues the lack of robustness caused, and resolve those issues without people dying.

My core point is that if you're designing a system, *any system*, you should be thinking about what is required to produce safe software. It isn't just "well I don't work on medical devices that shoot radiation at people, so I don't need to worry"[1]. You still need to worry, you just solve those problems in different ways. It's not just deaths either, it's PII leakage, it's stalking and harassment enablement, it's privilege escalation, etc.

[1] I have heard this, or a variation of this, from dozens of people over my career. This is my core bugbear about Therac-25: it allows people to think this way, and divest themselves of responsibility. I am very happy to hear you are teaching a course about Horizon, because it's a much more grounded example that devs will hopefully see themselves in more. If your course is publicly available btw, I'd love to read it.


It's just a course about database design, and in the first seminar we look at different news stories that have something to do with databases, like Trump putting some random Italian chef on an international sanctions list, which should make us think about primary keys and identifying people.

And the Horizon Post Office scandal is the last and most poignant example that real people are affected by the systems we build and the design decisions we make. That's sometimes easy to forget.


Non-agentic AI is already "killing" people by some definitions. There's a post about someone being talked into suicide on the front page right now, and they are 100% going to get used for something like health insurance and benefits where avoidable death is a very possible outcome. Self-driving cars are also full of "AI" and definitely have killed people already.

Which is not to say that software hasn't killed people before (Horizon, Boeing, probably loads of industrial accidents and indirect process control failures leading to dangerous products, etc, etc). Hell, there's a suspicion that austerity is at least partly predicated on a buggy Excel spreadsheet, and with about 200k excess deaths in a decade (a decade not including Covid) in one country, even a small fraction of those being laid at the door of software is a lot of Theracs.

AI will probably often skate away from responsibility in the same way that Horizon does: by being far enough removed and with enough murky causality that they can say "well, sure, it was a bug, but them killing themselves isn't our fault"

I also find AI copilot things do not work well with embedded software. Again, people YOLOing embedded isn't new, but it might be about to get worse.


The 737 MAX MCAS debacle was one such failure, albeit involving a wider system failure and not purely software.

Agreed on the future but I think we were headed there regardless.


Yeah reading this reminded me a lot of MCAS. Though MCAS was intentionally implemented and intentionally kept secret.


They killed "only" about 350 people combined, but the two fatal crashes of the Boeing 737 MAX in 2018 and 2019 were due to poor quality software:

https://en.wikipedia.org/wiki/Maneuvering_Characteristics_Au...


> Personally, I've found even the latest batch of agents fairly poor at embedded systems

I mean, even in simple CRUD web apps where the data models are more complex, and where the same data has multiple structures, the LLMs get confused after the second data transformation (at most).

E.g. You take in data with field created_at, store it as created_on, and send it out to another system as last_modified.
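
A minimal sketch of that kind of rename chain, assuming hypothetical inbound/storage/outbound types (only the three field names come from the example above, everything else is made up):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Inbound payload: the upstream system calls the field "created_at".
    type Inbound struct {
        CreatedAt string `json:"created_at"`
    }

    // Internal storage model: the same timestamp is persisted as "created_on".
    type Record struct {
        CreatedOn string `json:"created_on"`
    }

    // Outbound payload: the downstream system expects "last_modified".
    type Outbound struct {
        LastModified string `json:"last_modified"`
    }

    func main() {
        var in Inbound
        _ = json.Unmarshal([]byte(`{"created_at":"2024-01-02T03:04:05Z"}`), &in)

        // Two renames of the same value -- exactly the kind of hop where
        // an LLM (or a human) starts mixing the names up.
        rec := Record{CreatedOn: in.CreatedAt}
        out := Outbound{LastModified: rec.CreatedOn}

        b, _ := json.Marshal(out)
        fmt.Println(string(b)) // {"last_modified":"2024-01-02T03:04:05Z"}
    }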


Talk to anyone in those industries about 'automation' on medical or critical infra devices and they will tell you NO. No touching our devices with your rubbish.

I am pretty confident they won't let Claude touch it if they don't even let deterministic automations run...

That being said, maybe there are places. But this is always the sentiment I got: no automating, no scanning, no patching. The device is delivered certified and any modifications will invalidate that. Any changes need to be validated and certified.

It's a different world from making apps, that's for sure.

Not to say mistakes aren't made and change doesn't happen, but I don't think people designing medical devices will be going YOLO mode on their dev cycle anytime soon... give the folks in safety-critical system engineering some credit.


> but I don't think people designing medical devices will be going YOLO mode on their dev cycle anytime soon

I don't have the same faith in corporate leadership as you, at least not when they see potentially huge savings by firing some of the expensive developers and using AI to write more of the code.


Neat. Anyone know what is used to make the animations? I like the graphic design!



Small but effective visual cues, smooth and carefully chromatic.

I am struck by the conceptual framework of classification tasks so snappily rendering clear categories from such fuzziness.


That band is often chosen _because_ of the absorption band of oxygen. Significant attenuation at those frequencies limits range, allowing for higher frequency reuse and less interference over a greater area.
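
A back-of-the-envelope sketch of why that works, assuming the band in question sits around 60 GHz (the oxygen absorption peak) and roughly 15 dB/km of sea-level absorption; both numbers are assumptions, not from the thread:

    package main

    import (
        "fmt"
        "math"
    )

    const (
        c        = 3e8  // speed of light, m/s
        freqHz   = 60e9 // 60 GHz, in the oxygen absorption band
        o2LossDB = 15.0 // assumed oxygen absorption, dB per km (rough sea-level figure)
    )

    // fspl returns free-space path loss in dB for a distance d in metres.
    func fspl(d float64) float64 {
        return 20 * math.Log10(4*math.Pi*d*freqHz/c)
    }

    func main() {
        for _, d := range []float64{10, 100, 1000} {
            o2 := o2LossDB * d / 1000
            fmt.Printf("%5.0f m: FSPL %.0f dB + O2 %.1f dB = %.0f dB\n", d, fspl(d), o2, fspl(d)+o2)
        }
        // The absorption term is negligible at 10 m but adds ~15 dB at 1 km,
        // so a same-band transmitter a kilometre away is attenuated far more
        // than the free-space law alone would predict.
    }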


Wouldn't it be the same if you just decreased the transmission power?


Interference is a real problem with FMCW radars, either maliciously in the case of electronic warfare, or accidentally in the case you mentioned, with many radars in the same space using the same frequency band. Wifi and cell phones use time division or frequency division multiplexing techniques, but radars (at least current-gen) generally do not.

There are mitigation techniques like randomization of chirp frequencies, choosing different idle times between frames, and signal processing techniques to try to detect interference and filter it out. In the general case, FMCW techniques will always have interference problems.

This is one reason amongst many others that military radars do not use FMCW but instead coded pulse compression techniques.
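
As a rough illustration of the first two mitigations (a generic sketch with made-up numbers, not any particular radar's scheme), randomizing the chirp start frequency and the inter-frame idle time just means drawing small per-frame offsets around the nominal values, so two radars sharing the band rarely stay aligned for long:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // FrameParams holds the per-frame FMCW parameters a transmitter can dither.
    type FrameParams struct {
        StartFreqHz float64 // chirp start frequency
        IdleTimeSec float64 // idle time before the next frame
    }

    // nextFrame perturbs nominal parameters with small random offsets, so that
    // interference from another radar tends to smear into noise instead of
    // showing up as a coherent, persistent ghost target.
    func nextFrame(r *rand.Rand) FrameParams {
        const (
            nomStart  = 60e9   // nominal start frequency, Hz (made-up example)
            startSpan = 50e6   // +/- 25 MHz of start-frequency dither
            nomIdle   = 100e-6 // nominal 100 us idle time between frames
            idleSpan  = 20e-6  // +/- 10 us of idle-time dither
        )
        return FrameParams{
            StartFreqHz: nomStart + (r.Float64()-0.5)*startSpan,
            IdleTimeSec: nomIdle + (r.Float64()-0.5)*idleSpan,
        }
    }

    func main() {
        r := rand.New(rand.NewSource(1))
        for i := 0; i < 3; i++ {
            p := nextFrame(r)
            fmt.Printf("frame %d: start %.4f GHz, idle %.1f us\n", i, p.StartFreqHz/1e9, p.IdleTimeSec*1e6)
        }
    }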


Is it though? I can run Windows programs from 20 years ago on my Windows machine just fine.

Issues with Linux binary distribution, meanwhile, are ubiquitous, with glibc probably being the single biggest offender. What's worse is that you can't even really statically link it without herculean effort. I've spent an inordinate amount of my life trying to wrangle third-party binaries on Linux and it's just a sorry state of affairs.

Try taking a binary package from a vendor from even just 5 years ago and there's a non-zero chance it won't run on your modern distro.


You are talking about backward compatibility; the parent thread is about forward compatibility. You won't have much luck running a modern executable on XP unless the vendor went out of their way to make that happen.

> What's worse is that you can't even really statically link it without herculean effort.

The program we are discussing happens to be written in Go so it's trivial to build a statically linked executable.


Are you sure you want glibc statically linked into your Go executable?


glibc won't be used at all.

With Go on Linux libc is only needed when the libc DNS resolver is used (instead of Go's built-in one) or if C libraries are used. superfile doesn't need either of these so it's very simple to build it as a pure Go executable which will be statically linked and contain no C code at all.
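
For a pure-Go program, disabling cgo is usually all it takes; a minimal generic sketch (not superfile's actual build setup):

    // main.go -- a trivial pure-Go program with no cgo dependencies.
    package main

    import "fmt"

    func main() {
        fmt.Println("hello from a statically linked binary")
    }

    // Build with cgo disabled so the toolchain never links against glibc:
    //
    //   CGO_ENABLED=0 go build -o hello .
    //   file hello   # reports "statically linked" on Linux
    //   ldd hello    # reports "not a dynamic executable"
    //
    // With cgo off, Go's built-in DNS resolver and user/group lookups are used,
    // so the resulting binary has no libc dependency at all.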


OP's example did use glibc, though.


It's an interesting comparison. I agree that five years is well within the expected period of viability of an operating system. Some points to consider:

- any given release of a Linux distro will probably work on hardware released five years earlier -- one factor that reduces the cost of upgrading the OS (there are many more obvious factors)

- Microsoft is highly motivated to get customers to upgrade to the new Windows at the time. The legacy support is well-known as a "bone" (or: "a factor that reduces the cost of upgrading the OS")

- binary backwards/forwards compatibility is less of an issue in an environment that doesn't treat source code as a secret

- why run old versions of software? In other words: xterm is older than Windows and also as new as Windows

Also, I've always found it amusing that I have much less trouble running old Windows software on Linux (Wine) than on new versions of Windows.


C++ has long surpassed the point where mere mortals like me can understand it; it's so loaded with baggage, footguns, and inscrutable standards language that honestly I think the only thing keeping it going is institutional inertia and "backwards compatibility" (air quotes).

I work extensively in the embedded space and unfortunately C and C++ are still pretty much the only viable languages. I cannot wait until the day Rust or some other language finally supplants them.


I'm currently doing work with Rust on ESP32 platforms and I'll have to say, it's not quite ready yet. Debug tools still have issues, and we're facing some problems with vendor-specific magic in ESP-IDF version 5.


What's stopping Rust from being used in embedded?


Among other things, tooling and vendor libraries. Vendor libraries are often composed of thousands upon thousands of lines of auto-generated C headers and written in some bespoke format. Demonstration code and/or sample drivers are almost invariably provided in C. Of course you _can_ rewrite these in Rust, but if you're an engineer trying to get shit working, you'd first basically have to reinvent the whole wheel just to do bringup.

I don't even want to talk about the state of proprietary vendor tooling...


Documentation is severely lacking, and vendor-specific libraries and build systems sometimes interfere with cargo.

There's also the problem of rust-analyzer being relatively flaky in general, and even more so when used with environment-specific / KConfig / build-system feature flags that enable or disable certain library headers.

