I think Coccinelle is a really cool tool, but I find its documentation totally incomprehensible for some reason. I've read through it multiple times, but I always end up having to find some preexisting script that does what I want, or else blundering around trying different variations at random until something works, which is frustrating.
"The specification describes bits as combinations of 0, 1, and x, but also sometimes includes (0) and (1). I’m not sure what the parenthesized versions mean"
The answer is that the (0) and (1) are should-be-zero and should-be-one bits: if you set them wrongly then you get CONSTRAINED UNPREDICTABLE behaviour, where the CPU might UNDEF, NOP, ignore that you set the bit wrongly, or set the destination register to garbage. In contrast, plain 0 and 1 are bits that have to be that way to decode to this instruction; if you set them to something else then the decode will take you to some other instruction (or to UNDEF) instead.
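To make that concrete, here's a rough sketch of how a decoder treats the two kinds of bits differently (Python, with a made-up 8-bit encoding rather than any real Arm instruction):

    # Made-up encoding "1 0 1 x (0) x x (1)", bit 7 first: bits 7..5
    # must be 101 to decode to this instruction at all; bit 3 is a (0)
    # bit, bit 0 is a (1) bit; bits 4, 2, 1 are operand fields (the x's).
    DECODE_MASK, DECODE_VAL = 0b11100000, 0b10100000
    SB_MASK, SB_VAL = 0b00001001, 0b00000001

    def decode(insn):
        if (insn & DECODE_MASK) != DECODE_VAL:
            # plain 0/1 bits wrong: this is some other instruction
            return "different instruction (or UNDEF)"
        if (insn & SB_MASK) != SB_VAL:
            # (0)/(1) bits wrong: CONSTRAINED UNPREDICTABLE, so a real
            # CPU may UNDEF, NOP, ignore the bad bits, or give garbage
            return "constrained unpredictable"
        return "this instruction; operands come from the x bits"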
This is an important ISA feature -- an instruction encoding that is wasteful of its encoding space is one that has no room for future new instructions (or which has to encode the new instructions in complicated ways to fit in whatever tiny "holes" are left in the encoding space).
The old 32-bit Arm encoding had this problem, partly because of the "all instructions are conditional" feature. Even after the clawback of the "never" condition that wasted 1/16 of the available instruction encoding space as NOPs, it was tricky to find places to put new features.
This is a result of the market and its demands, not something specific to the architecture. In desktop and server, customers demand that they can buy a new machine and install a previously released stable OS on it. That means the vendors will implement the necessary standards and cross compatibility to make that happen. In the embedded market, customers don't demand that, and so vendors have no incentive to provide it. Instead what you get is that the specific combined hardware-and-software product works and is shipped with whatever expedient set of hacks gets it out of the door. Having a new cool hardware feature that works somehow or other is more important for sales than whether that driver is upstream or there's a way to describe it in ACPI.
Where Arm is in markets that do demand compatibility (i.e. server), the standards like UEFI and ACPI are there and work. Where it's in markets like embedded, you still see the embedded profusion of different random stuff. Where other architectures are in the embedded market, you also see a wide range of different, not very compatible hardware: look at RISC-V for an example.
It's not a completely non-special character: for instance, in bash it's special inside braces in the syntax where "/{,usr/}bin" expands to "/bin /usr/bin". But since that syntax has to start with the open brace, you'll be reminded of the need to escape a literal comma there if you ever want one.
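For example (bash; the backslash is how you write a literal comma inside the braces):

    $ echo /{,usr/}bin
    /bin /usr/bin
    $ echo {a\,b,c}.txt
    a,b.txt c.txt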
Some of the 386 bugs described there sound to me like the classic kind of "multiple different subsystems interact in the wrong way" issue that can slip through the testing process and get into hardware, like this one:
> For example, there was one bug that manifested itself in incorrect instruction decoding if a conditional branch instruction had just the right sequence of taken/not-taken history, and the branch instruction was followed immediately by a selector load, and one of the first two instructions at the destination of the branch was itself a jump, call, or return.
Even if you write up a comprehensive test plan for the branch predictor, and for selector loads, and so on, it might easily not include that particular corner case. And pre-silicon testing is expensive and slow, which also limits how much of it you can do.
The 80386 (1985) did not have a branch predictor; Intel first used one in the Pentium (1993).
Nevertheless, the states of the internal pipelines, which were supposed to be stopped, flushed and restarted cleanly by taken branches, depended on whether the previous branches had been taken or not taken.
Ah, thanks for that correction -- I jumped straight from "depends on the history of conditional branches" to "branch predictor" without stopping to think that that would have been unlikely in the 386.
Before branch predictors existed, most CPUs that used any kind of instruction pipelining behaved like a modern CPU in which all branches are predicted as not taken.
Thus on an 80386 or 80486 CPU, not-taken branches behaved like correctly predicted branches on a modern CPU, and taken branches behaved like mispredicted branches on a modern CPU.
The 80386 bug described above was probably caused by some kind of incomplete flushing of a pipeline after a taken branch, which left it in a partially invalid state that a specific sequence of following instructions could then expose.
This sort of bug, especially in and around pipelines, is always hard to find. On chips I've built, we had one guy who built a system that would generate random instruction streams to try to trigger as many of them as we possibly could.
Yeah, I think random-instruction-sequence testing is a pretty good approach to try to find the problems you didn't think of up front. I wrote a very simple tool for this years ago to help flush out bugs in QEMU: https://gitlab.com/pm215/risu
Though the bugs we were looking to catch there were definitely not the multiple-interacting-subsystems type, and more just the "corner cases in input data values in floating point instructions" variety.
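The core loop of that kind of tester can be tiny, though. Here's a sketch of the general shape (Python; this is not risu's actual format or protocol, and the encoding templates and execute() harness are stand-ins):

    import random

    # Hypothetical encoding templates: fixed opcode bits plus a mask of
    # randomizable operand bits. A real tool derives these from the
    # ISA's encoding tables.
    TEMPLATES = [
        (0x0A000000, 0x00FFFFFF),  # made-up "ALU op" pattern
        (0x4C000000, 0x0000FFFF),  # made-up "load/store" pattern
    ]

    def random_insn():
        fixed, operand_mask = random.choice(TEMPLATES)
        return fixed | (random.getrandbits(32) & operand_mask)

    def fuzz_once(ref, dut, length=100):
        """Run one random sequence on a reference implementation and on
        the device under test (e.g. hardware vs QEMU) and diff the
        final architectural state."""
        insns = [random_insn() for _ in range(length)]
        if ref.execute(insns) != dut.execute(insns):
            print("mismatch:", [hex(i) for i in insns])

Most of the real work goes into the harness that runs the sequences and captures register state, and into shrinking a failing sequence down to something short enough to debug.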
Firstly, when you ask a friend or colleague you're asking a favour that you know will take them some time and effort. So you save it for the important stuff, and the rest of the time you keep putting in the effort yourself. With an LLM it's much easier to lean on the assistance more frequently.
Secondly, I think when a friend is giving advice the responses are more likely to be advice, i.e. more often generalities like "you should emphasize this bit of your resume more strongly" or point fixes to grammar errors, partly because that's less effort and partly because "let me just rewrite this whole thing the way I would have written it" can come across as a bit rude if it wasn't explicitly asked for. Obviously you can prompt the LLM to only provide critique at that level, but it's also really easy to just let it do a lot more of the work.
But if you know you're prone to getting into conflicts in email, an LLM-powered filter on outgoing email that flagged up "hey, you're probably going to regret sending that" mails before they went out the door seems like it might be a helpful tool.
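Just to sketch the shape such a filter might take (Python; regret_score() is a stand-in for whatever model call and prompt you'd actually use):

    import smtplib
    from email.message import EmailMessage

    def regret_score(body):
        """Stand-in: prompt an LLM to rate, 0.0 to 1.0, how likely the
        sender is to regret this mail, and parse the reply."""
        raise NotImplementedError  # plug in whatever model/API you use

    def send_with_check(msg: EmailMessage, threshold=0.7):
        if regret_score(msg.get_content()) >= threshold:
            print("hey, you're probably going to regret sending that")
            return False                      # held for a second look
        with smtplib.SMTP("localhost") as s:  # assumes a local relay
            s.send_message(msg)
        return True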
"Firstly, when you ask a friend or colleague you're asking a favour that you know will take them some time and effort. So you save it for the important stuff, and the rest of the time you keep putting in the effort yourself. With an LLM it's much easier to lean on the assistance more frequently."
- I find this a point in favor of LLMs rather than a flaw. It is a philosophical stance to hold that what does not require effort or time is intrinsically not valuable (see using GLP-1 peptides vs. sucking it up to lose weight). Sure, it requires effort and dedication to clean your house, but given the means (money), wouldn't you prefer to have someone else clean your place?
"Secondly, I think when a friend is giving advice the responses are more likely to be advice"
- You can ask an LLM for advice instead of having it do the writing for you directly, without further reflection on the text the model provides.
Here I find parallels with therapy, which, in its modern form, does not provide answers but rather questions, means of investigation, and tools to better deal with the problems of our lives.
But if you ask people who go to therapy, the vast majority of them would much prefer to receive direct guidance (“Do this/don't do that”).
In the cases in which I wrote a message or email on behalf of someone else, I was asked to do it: can you write it for me, please? I even had to write recommendation letters for myself--I was asked to do that by my PhD supervisor.
I wasn't arguing that getting LLMs to do this is necessarily bad -- I just think it really is different from having in the past been able to ask other humans for help, and so that past experience isn't a reliable guide to whether we might find we have problems with unexpected effects of this new technology.
If you are concerned about possible harms in "outsourcing thinking and writing" (whether to an LLM or another human) then I think that the frequency and completeness with which you do that outsourcing matters a lot.
It can become an indispensable asset over time, or a tool that can be used at certain times to solve, for example, mundane problems that we have always found annoying and that we can now outsource, or a coaching companion that can help us understand something we did not understand before. Since humans are naturally lazy, most will default to the first option.
It's a bit like the evolution of driving. Today, only a small percentage of people are able to describe how an internal combustion engine works (<1%?), something that was essential in the early decades after the invention of the car. But I don't think that those who don't understand how an engine works feel that their driving experience is limited in any way.
Certainly, thinking and reasoning are universal tools, and it could be that in the near future we will find ourselves dumber than we were before, unable to do things that were once natural and intuitive.
But LLMs are here to stay, they will improve over time, and it may well be that in a few decades, the human experience will undergo a downgrade (or an upgrade?) and consist mainly of watching short videos, eating foods that are engineered to stimulate our dopamine receptors, and living a predominantly hedonistic life, devoid of meaning and responsibility. Or perhaps I am describing the average human experience of today.
You can have either completely separate hot and cold taps, each with their own spout, or you can have a setup with separate hot and cold knobs and a single spout.
For my bathroom sink I specified a two taps/one spout unit (similar to this one https://www.screwfix.com/p/swirl-traditional-chrome-104mm-cl... ) because I prefer to be able to get "this is definitely cold water with absolutely no hot water mixed in" when that's what I want. (My hot water comes from a combi boiler, so if you run the hot tap for only a short time, all that happens is that you burn some gas when the boiler detects the water flow, but you just get the cold water that was already in the pipe.)
I like the combined handle type for showers, where you always want some hot water and are generally running the water for a long time.
Tape save and load seemed pretty reasonable on our system. It does depend rather on the tape deck you're using and also on getting the volume and tone settings right, though. We had the official TI tape deck for it.