> Where is the RTL? Where are the GDSII masks? Why am I unable to look at the branch predictor unit in the Github code viewer? Or (God forbid!) the USB/HDMI/GPU IP? I reject the notion that these are unreasonable questions.
As you correctly note, the ISA is open, not this CPU (or board).
The important point is that using an open ISA allows you to create your own CPU that implements it. This CPU can then be open (i.e., you provide the RTL, etc.), if you so desire.
I assume it will be much more difficult (or impossible?) to provide the RTL for a CPU implementing the AMD64 ISA, since that one has to be licensed. I wonder whether paying for the license even allows you to share your implementation with the world. Even if it does, you are less likely to do so, given that you have to recoup the licensing fee.
Since there is no license to pay for in the case of RISC-V, you can open up the design of your CPU without having to pay for that privilege.
My superficial understanding is that ARM does not prevent you from sharing implementation details of your own design, but most chips also license a starting implementation that carries such limitations. So the end result is often more restricted than the ISA license alone would require.
Most ARM licensees aren't permitted to create custom implementations, only to use IP cores provided by ARM. A few companies do have an architectural license, allowing them to create their own implementations, but they aren't likely to share. (It's also possible that the terms of their license prohibit them from making their designs public.)
> The important point is that using an open ISA allows you to create your own CPU that implements it.
So? You've been able to do that since...computers. Anyone can roll their own ISA any time they want. It's a low-effort project that someone with maybe a Master's-student level of knowledge can do competently. When I was in school, we even had a class where you would cook up a (simple) ISA and implement it (2901 bit-slice processors); these days they use FPGAs.
So you got your own processor for your own ISA...that was slow, expensive (no economy of scale) and without a market. But very fun, and open source, at least. And if "create your own CPU that implements it" is what you want, go forth and conquer...everything you need is already there and has been for a long time.
But if your goal is "I want an open source ISA that I can produce that's price and/or performance competitive with the incumbents", well, that's a totally different ballgame.
And there are open source ISAs that have been around for decades (SPARC, POWER, SuperH). These are ISAs that already have big chunks of ecosystem in place. The R&D around how to make them competitive already exists. Some, like LEON SPARC, have even gone into something like production (and flown in space).
So, yes, an open source ISA affords the possibility that we can make processors based on our own ISAs on our own terms. It has even, on extremely rare occasions, produced a product. But the fact remains, the market hasn't cared in the slightest to invest what's required to turn that advantage into a real competitor to the incumbent processors.
Yes, you can create your own ISA. But to run what software?
If I create my own RISC-V implementation, I can install Ubuntu on it. Maybe even Steam.
See the difference?
And the market has responded with a tidal wave of CPU contenders. Like in the rest of the world, not all of them target the highest-end portion of the market. But some are choosing to play there. Have you checked out Ascalon?
And why did Qualcomm pay all that money for Ventana recently? You do not expect them to release high-end RISC-V chips? I mean, they already ship many low-end ones.
> And why did Qualcomm pay all that money for Ventana recently? You do not expect them to release high-end RISC-V chips? I mean, they already ship many low-end ones.
Ventana is an extremely bad example to use here. Its acquisition price is undisclosed; it could be just a modest sum for acquiring the team behind it. Secondly, Qualcomm's Nuvia acquisition was pretty huge, and there is no reason whatsoever to believe the Ventana acquisition is remotely comparable, so it proves nothing about RISC-V adoption anyway.
I notice that the three benefits they flag for RISC-V are: flexibility, control, and visibility.
I wonder how they felt about "control" after ARM tried to stop them from commercializing the value of their Nuvia acquisition? I wonder if it had anything to do with their next big acquisition being RISC-V based instead?
I also wonder why, on their Oryon page, Qualcomm never mentions ARM. Not even once. Even to the question of whether Oryon is x86, they do not answer that it is ARM. Why not?
Why don't you read what was written instead of being the unthinking RISC-V fanboi in the room? My only point was that the RISC-V license is probably not the biggest factor in its success, since there have been many, many open source ISAs that weren't successful.
I agree! Science is about experiments to verify hypotheses. Design of Experiments seems like a fundamental part of that. That's also why the quote below made me laugh.
> What if you don’t care about efficiency or causality?
"Yeah, what about if you don't care about money/time and are happy with finding a correlation only?!!?"
I like the idea of Gemini and was inspired to write a script to turn my blog posts, written in Markdown, into gemtext. Sadly I still haven't finished that script ...
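Roughly what I have in mind is something like this (a minimal, unfinished sketch; it only pulls inline Markdown links out onto their own gemtext `=>` lines and leaves everything else untouched):

```python
import re

INLINE_LINK = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def md_to_gemtext(md: str) -> str:
    """Very rough Markdown -> gemtext conversion (sketch, not the finished script)."""
    out = []
    for line in md.splitlines():
        links = INLINE_LINK.findall(line)
        # Keep the link label in the flowing text ...
        out.append(INLINE_LINK.sub(lambda m: m.group(1), line))
        # ... and emit each link on its own gemtext link line below the paragraph.
        for label, url in links:
            out.append(f"=> {url} {label}")
    return "\n".join(out)

print(md_to_gemtext("See the [Gemini docs](gemini://geminiprotocol.net/docs/) for details."))
```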
My main issue with the protocol is that it requires creating a new TLS connection for every request. That is indeed a simple approach, but I argue that the extra round trips this adds are not worth the simplicity gained in this case.
Coming up with a simple way to reuse a connection would reduce the round trips needed drastically. If we put our heads together, I feel like we could come up with a way to do that without overly complicating the protocol ...
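For a sense of the cost, this is roughly what fetching a single page looks like today (a minimal sketch in Python; certificate handling is deliberately simplified): every request pays for a fresh TCP connection and TLS handshake before the one-line request is even sent.

```python
import socket
import ssl

def gemini_fetch(url: str, host: str, port: int = 1965):
    """Fetch one Gemini URL; the connection is thrown away afterwards."""
    ctx = ssl.create_default_context()
    # Many Gemini servers use self-signed certificates (trust-on-first-use),
    # so this sketch skips verification; a real client would pin the cert instead.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as raw:          # TCP handshake
        with ctx.wrap_socket(raw, server_hostname=host) as tls:  # TLS handshake
            tls.sendall((url + "\r\n").encode("utf-8"))           # the whole request
            response = b""
            while chunk := tls.recv(4096):
                response += chunk
    header, _, body = response.partition(b"\r\n")
    return header.decode("utf-8"), body

# Every page (and every image, feed, etc.) repeats the full handshake:
# header, body = gemini_fetch("gemini://example.org/", "example.org")
```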
I have only just started out but it feels nice indeed! A hindrance is that I am not very artistically gifted, but as long as I make it mostly for myself, I don't mind too much.
Hm, I am using [dwm](https://dwm.suckless.org/) with a custom keybinding to shift to the left or right workspace. That seems similar enough, other than the fact that changing the split ratio will affect all workspaces on dwm while on Niri it most likely will not ...
I use a variety of DEs and WMs but I still can't find anything better than dwm for my desktop. If I need some extra controls, xfce4-panel runs modularly and neatly covers the main bar for whatever workspace it's on. It handles both tiling and floating perfectly. I hope more software projects pick up the focus on simplicity, especially making programs as easy to reconfigure and compile as dwm.
In general, I think having ligatures in a monospace font is a bad idea.
The reason is that in a monospaced font all glyphs are supposed to have the same advance width. This forces a ligature to always take up the same space as the two (or however many) glyphs involved, which may stretch the ligature glyph in an unintended way (if it is even possible to rasterise it that way).
Monospace fonts are also often used for programming (because they make sure that columns line up consistently, regardless of which font is used). I personally don't see the point in showing ligature glyphs that do not correspond to the actual Unicode code points encoded in the source code.
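To make the distinction concrete (a throwaway snippet, purely for illustration): the ligature only exists in the rendering, while the stored characters remain plain ASCII:

```python
# ">=" is two ASCII characters; "≥" is one code point with a multi-byte UTF-8 encoding.
print(len(">="), ">=".encode("utf-8"))            # 2 b'>='
print(len("\u2265"), "\u2265".encode("utf-8"))    # 1 b'\xe2\x89\xa5'
```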
I thought the same, but when Berkeley Mono got ligatures I gave them a go and never turned them off.
I think the truth is that any good monospace font is designed with an awareness of the grid those characters are laid out in. The rhythm and stability of that grid is a feature of monospace fonts. It lets us line up text, draw shapes and so on.
You would think not having the underlying characters visible would be an issue, but ligatures are just symbols like any other. In a short time you learn to read them, like you would any contracted word.
It is probably a bit easier to start from a language you are familiar with. That image is intentionally a mishmash of random arrows and operators that don't necessarily align with the semantics of real code.
I think that's one of the things Fira Code's Readme [1] does a better job at than Berkeley Mono's page. The big image at the top breaks the ligatures down into high-level categories or the programming language they are most associated with, showing the plain version side by side with the ligature. Further down the Readme you can see several real examples from programming languages with the ligatures called out, giving you context clues for what they look like in a language you may already be familiar with.
Rationally, what you say makes sense, of course. But I love ligatures for programming. First of all, I think they just look nice.
But second, I also feel that for me, they make code a bit more readable. Without ligatures, multiple characters are often used to create one symbol; with ligatures, one symbol is always rendered as one single visual character. So if I read code, it just feels a bit easier for my brain to parse ≥, rather than >=.
> So if I read code, it just feels a bit easier for my brain to parse ≥, rather than >=.
Clearly there's personal preferences involved, so there's no objectively better or worse, but it still blows my mind, because reading ligature symbols like ≥ and ≠ always makes my brain skip a beat, so I need to reread a few times to "get it".
Some of that subjective influence with ≥ and ≠ especially is how much time you've spent in math courses or reading math papers. For some of us those have always been the "real" operators and >= and != the fallback replacements that look "close enough" in easy to type ASCII. We were sort of doing the opposite all along, translating the ASCII breakdowns into the math notation in our heads, and ligatures can feel like a bit of a relief because now you see the "real thing".
At least for programming ligatures, wouldn't they tend to be shown as a single glyph (and occupy the same space)? I don't like them, but for people that do, I expected ≠ to be displayed instead of !=, not a different longer glyph.
In coding fonts with ligatures, ligatures usually have the same width as all of the component characters combined. So if you type !=, you will see a character looking like ≠ that is two advances wide. You will even be able to select the middle of the character, hit backspace, and delete the (invisible) "!".
This is necessary because if all ligatures were one advance wide, you couldn't easily tell the difference between =, ==, and ===, or between != and !==.
I would rather != stay the way it is, and have the language support ≠ as a synonym. Thirty years ago, HyperCard on the Mac supported ≠ as an inequality comparator (as well as ≤ and ≥ for <= and >=, respectively). On the Mac it's easy to type with the Option key and =.
Back then, those characters weren't easy to type on DOS/Windows, so it seems to be a case of being stuck with the "lowest common denominator" of character input across OSes, reminiscent of C's trigraphs, where "??<" was used because keyboards didn't have "{" (thankfully those are long gone). Ligatures are a hack around that, but they always struck me as an inelegant solution.
There are one or two advantages over regular GUIs, but that's it.
The biggest is probably that they are lightweight, since there are no GUI library dependencies (and if there are TUI ones, they are usually much lighter than their GUI counterparts). This also means there are fewer (if any) dependencies to distribute compared to a GUI.
The only other advantage I can come up with is that a TUI will have to be usable by keyboard only (in almost all cases). This is not a given for regular GUI libraries.
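For illustration (a hypothetical sketch using Python's standard curses module, not modeled on any particular TUI), a keyboard-only menu is the natural default:

```python
import curses

def menu(stdscr):
    """Minimal keyboard-only TUI: arrow keys move the highlight, Enter picks, q quits."""
    curses.curs_set(0)                      # hide the cursor
    items = ["status", "log", "diff", "quit"]
    selected = 0
    while True:
        stdscr.erase()
        for i, item in enumerate(items):
            attr = curses.A_REVERSE if i == selected else curses.A_NORMAL
            stdscr.addstr(i, 0, item, attr)
        key = stdscr.getch()
        if key == curses.KEY_UP:
            selected = (selected - 1) % len(items)
        elif key == curses.KEY_DOWN:
            selected = (selected + 1) % len(items)
        elif key in (curses.KEY_ENTER, 10, 13):
            return items[selected]
        elif key == ord("q"):
            return None

if __name__ == "__main__":
    print("picked:", curses.wrapper(menu))
```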
I'm not a fan of TUIs either. I think the only one I am using regularly is `tig` (https://jonas.github.io/tig/). I guess the reason is that I don't have to remember the git revision list syntax that way and that `tig` allows for easy commit searching with `/` ...