DVI is a very large spec. It supports everything from pure VGA over its analog pins to full-fledged HDMI (it's actually the other way around: HDMI is secretly just DVI). Finding monitors which support all of the simpler modes is an issue, whereas finding monitors which support old-school VGA is fairly easy.
>it's actually the other way around: HDMI is secretly just DVI
HDMI signalling is actually an extension of DVI: it embeds TERC4-encoded data islands containing extra metadata and PCM audio during the blanking intervals. (Yes, I was surprised at first too, but HDMI/DVI have blanking periods just like VGA.)
It doesn't have any error detection or correction. The timing of the signal is still based on CRT TVs.
Really, the whole lot should be done away with and replaced with something properly bidirectional, and preferably IP-based so it can be routed anywhere.
Packets still have to encode and adhere to strict VGA timings (Hsync/Vsync/blanking), and you aren't allowed to send anything useful between the packets unless it's a blanking period.
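For a sense of how much of the signal those legacy timings reserve, here's a rough sketch (plain Python, using the standard VESA 640x480@60 numbers; the framing around them is mine) of the share of each frame that is blanking, which is the only window where HDMI is allowed to place its data islands:

```python
# Toy calculation of the classic VESA 640x480@60 timing, showing how much
# of each frame is blanking: the only place HDMI data islands may go.
# The timing values are the standard VESA ones.

PIXEL_CLOCK_HZ = 25_175_000  # 25.175 MHz

h_active, h_front, h_sync, h_back = 640, 16, 96, 48
v_active, v_front, v_sync, v_back = 480, 10, 2, 33

h_total = h_active + h_front + h_sync + h_back   # 800 pixel clocks per line
v_total = v_active + v_front + v_sync + v_back   # 525 lines per frame

pixels_per_frame = h_total * v_total
active_pixels = h_active * v_active
blanking_pixels = pixels_per_frame - active_pixels

refresh_hz = PIXEL_CLOCK_HZ / pixels_per_frame
print(f"refresh: {refresh_hz:.2f} Hz")                              # ~59.94 Hz
print(f"blanking share: {blanking_pixels / pixels_per_frame:.1%}")  # ~26.9%
```

Roughly a quarter of every frame is dead air inherited from the CRT era, and that slice is the entire budget for audio and metadata.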
It's a little frustrating that things which used to be kosher, like nvidia and nvidia_uvm linking, are all of a sudden not, because they got caught up in this crossfire.
Caught in the crossfire? NVIDIA has been skirting around the GPL for years with its Linux module, without ever contributing back. They've been hindering the adoption of Wayland for years due to their refusal to implement GBM in their driver. I guess they kinda deserve the flak, especially since AMD and Intel have shown the world time and time again that you can have a fully open GPU driver merged into Linux without any downsides.
So basically the kernel team repeatedly breaking the driver is how they want to bludgeon nVidia into giving them all their code and into supporting their own APIs?
Seems like petty behaviour to break users' hardware just because you disagree with driver licensing.
I wonder if you'd say the same if the team changing the API were Google's and the team constantly having to scramble to unbreak its code were an open-source project.
Half the benefit of having open source drivers is so that the kernel team can update the drivers themselves when they change the API. I don't think there would be a lot of complaints from open source developers if Google were to change one of their APIs while submitting high quality patches to update all the open source projects that use it.
And it was never kosher; NVIDIA knew exactly that what they were doing was really controversial and risky (I was there). Management at NVIDIA needs to have a forcing function to open the drivers before anything changes. They depend on and profit from Linux (ML!), so they have no choice but to comply.
IMO proprietary modules have never been kosher. They rely on a particular legal interpretation of the GPL that assumes that...
1. "Programs" (as defined by the GPL) can be legally separate works and share the same address space (on a platform where this arrangement is highly unusual)
2. Separate GPL Programs hosted in the same address space can share linked symbols determined to not be "internal APIs" (in a world where the Supreme Court might run roughshod over this and just say all API implementation is copyright-infringing)
3. "Operating system kernel" and "GPU driver for that self-same kernel" can be considered, regardless of address space colocation, to be separate GPL Programs.
This interpretation is highly unusual but holds primarily because Linus Torvalds and every other major kernel contributor endorsed it. Whether or not this constitutes promissory estoppel, implied license, or something else is up to Nvidia legal to decide; but it's highly likely that nobody with standing to challenge what would otherwise be an obvious GPL license violation is actually in a position to do so. Hence, Nvidia has access to a market they shouldn't.
RISC-V is inherently a customizable ISA, though, whereas ARM is very specific about what implementations are required to include to be called an "ARM processor". That wouldn't change with this acquisition.
No. They're isolated for a reason, with the RISC-V processor being used as the controller to manage the behavior of the other parts of the chip. Beyond just licensing fees, ARM is expensive because you're required to implement a lot. With that chip being RISC-V, they can make it as minimal and perfectly tuned as possible, so it's slow where it can afford to be cheap and fast where it needs to be.
That isn't the same at all. Canonical being a major backer of Linux is significant, but Linux is sufficiently open and diversified that it does not stand or fall by one party, albeit there are some that have more influence than others.
> GPUs are generally black boxes that you throw code at.
umm... what? what does that even mean? lol
I could kind of maybe begin to understand your argument from the graphics side, as users mostly interact with it at an API level; however, keep in mind that shader languages work the same way "CPU languages" do. It's all still compiled down to assembly, and there's no reason you couldn't make an open instruction set for a GPU the same as for a CPU. This is especially obvious when it comes to compute workloads, as you're probably just writing "regular code".
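To make that concrete, here's a toy sketch of what an open, minimal GPU-style instruction set could look like: a few made-up SIMT-ish opcodes interpreted once per "thread", the way a trivial compute shader runs once per element. Every opcode name and the encoding here are invented for illustration; this isn't any real vendor's ISA.

```python
# A toy "GPU ISA": made-up opcodes interpreted per thread, the way a
# trivial 1D compute shader runs once per element. Purely illustrative.

# Program: out[i] = in[i] * 2.0 + 1.0  (a classic multiply-add kernel)
PROGRAM = [
    ("LOAD",  0),        # r0 = in[tid]
    ("MULI",  0, 2.0),   # r0 = r0 * 2.0
    ("ADDI",  0, 1.0),   # r0 = r0 + 1.0
    ("STORE", 0),        # out[tid] = r0
]

def run_thread(program, tid, mem_in, mem_out):
    """Execute the program for a single 'thread' with 4 registers."""
    regs = [0.0] * 4
    for op, *args in program:
        if op == "LOAD":
            regs[args[0]] = mem_in[tid]
        elif op == "MULI":
            regs[args[0]] *= args[1]
        elif op == "ADDI":
            regs[args[0]] += args[1]
        elif op == "STORE":
            mem_out[tid] = regs[args[0]]
        else:
            raise ValueError(f"unknown opcode {op}")

def dispatch(program, mem_in):
    """Launch one 'thread' per element, like a 1D compute dispatch."""
    mem_out = [0.0] * len(mem_in)
    for tid in range(len(mem_in)):
        run_thread(program, tid, mem_in, mem_out)
    return mem_out

print(dispatch(PROGRAM, [1.0, 2.0, 3.0]))  # [3.0, 5.0, 7.0]
```

A real ISA adds scheduling, predication, and a memory hierarchy, but nothing about the instruction encoding itself has to be secret.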
Now, that said, would it be a good idea? I don't really see the benefit. A barebones GPU ISA would be too stripped back to do anything at all, and one with the vendor-specific accelerations needed to be useful is exactly what vendors will always want to keep under wraps.
Just 'cause Nvidia might want to keep architectural access under wraps doesn't necessarily mean that everyone else is going to, or that they have to in order to maintain a competitive advantage. CPU architectures are public knowledge, because people need to write compilers for them, and there are still all sorts of other barriers to entry and patent protections that would allow maintaining competitive advantage through new architectural innovations. This smells less of a competitive risk and more of a cultural problem.
I'm reminded of the argument over low-level graphics APIs almost a decade ago. AMD had worked together with DICE to write a new API for their graphics cards called Mantle, while Nvidia was pushing "AZDO" techniques about how to get the best performance out of existing OpenGL 4. Low-level APIs were supposed to be too complicated for graphics programmers for too little benefit. Nvidia's idea was that we just needed to get developers onto the OpenGL happy path and then all the CPU overhead of the API would melt away.
Of course, AMD's idea won, and pretty much every modern graphics API (DX12, Metal, WebGPU) provides low-level abstractions close to how the hardware actually works. Hell, SPIR-V is already halfway to being a GPU ISA. OpenGL became such a high-overhead API specifically because of this idea of "oh no, we can't tell you how the magic works". Actually getting all the performance out of the hardware became harder and harder because you were programming against a device model that was obsolete 10 years ago. Hell, things like explicit multi-GPU were just flat-out impossible. "Here's the tools to be high-performance on our hardware" will always beat out "stay on our magic compiler's happy path" any day of the week.
You could make a standardized GPU instruction set but why would anyone use it? We don't currently access GPUs at that level, like we do with the CPU.
It's technically possible, but the economics aren't there (that was my point). The cost of making a new GPU generally includes writing drivers and shader compilers anyway, so there's not much motivation to bother complying with a standard. It would be different if we did expose GPUs at a lower level (conversely, if CPUs were programmed via a JITted bytecode, we wouldn't see as much focus on their ISAs either, as long as the higher-level semantics were preserved).
You also inherit an entire chain of trust over code that you didn't write yourself and that nobody actually validated. The issue with leftpad.js wasn't that it was stupid; it was that it was dangerous.
That concern is somewhat orthogonal to the utility of a package manager itself. If you are using OSS in any way you need to pick and choose what you take on as a dependency. The package manager solves problems like distribution, dependency resolution, and discovery. The ease of use may contribute to poor decision making, which should not be wholly discounted.
To piggyback on this, it also goes down the dependency chain. Leftpad wasn't bad because it was being used directly: projects imported other libraries which either pulled leftpad in directly or, more likely, pulled in yet another library that did so further down the chain.
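As a sketch of that chain (hypothetical package names, nothing here reflects the real npm dependency graph), a simple transitive walk shows how leftpad lands in a project that never imported it:

```python
# Toy dependency resolver: walks a hypothetical dependency graph to show
# how leftpad ends up in a project that never asked for it directly.
DEPS = {
    "my-app":        ["web-framework", "test-runner"],
    "web-framework": ["template-lib"],
    "template-lib":  ["leftpad"],   # two hops away from my-app
    "test-runner":   [],
    "leftpad":       [],
}

def transitive_deps(pkg, graph, seen=None):
    """Depth-first walk collecting every package pulled in transitively."""
    if seen is None:
        seen = set()
    for dep in graph.get(pkg, []):
        if dep not in seen:
            seen.add(dep)
            transitive_deps(dep, graph, seen)
    return seen

print(sorted(transitive_deps("my-app", DEPS)))
# ['leftpad', 'template-lib', 'test-runner', 'web-framework']
```

Every package in that output is in your trust chain, whether or not it ever appears in your own import statements.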
I disagree. Especially if those non-std libraries are built on other non-std libraries and so on. Trusting a single organization is much easier than trusting a chain of organizations.
This link in the article was cooler than the article itself: https://www.riaa.com/u-s-sales-database/ (be sure to change the metric to inflation-adjusted revenue)
There's a really interesting story around Lauda Air Flight 004, where Boeing attempted to write the crash off as pilot error, but Niki Lauda basically threatened to go fly one himself and recreate the conditions as proof that it was not. Eventually Boeing conceded, and he didn't have to actually risk himself or another of his planes, but it's still an interesting anecdote.
He was severely burned in a Formula 1 race at the Nürburgring, and despite still suffering badly scarred lungs and weeping wounds, he was back in the seat before the end of the season. He only lost the championship that year because he refused to go out in the pouring rain at the Japanese Grand Prix, having decided it would have been foolhardy.