I don't know, RISC-V doesn't seem to be very disruptive at this point? And what's the deal with specialized chips that the article mentions? Today, the "biggest, baddest" CPUs - or at least CPU cores - are the general-purpose (PC and, somehow, Apple mobile / tablet) ones. The opposite of specialized.
Are they going to make one with 16384 cores for AI / graphics or are they going to make one with 8 / 16 / 32 cores that can each execute like 20 instructions per cycle?
Most of the work that goes into chip design isn't related to the ISA per se. So, it's entirely plausible that some talented chip engineers could design something that implements RISC-V in a way that is quite powerful, much like how Apple did with ARM.
The biggest roadblock would be lack of support on the software side.
On the software side, I think the biggest blocker is the lack of an affordable UEFI laptop for developers. A RISC-V startup aiming to disrupt the cloud should include one in their master plan.
I came to this thread looking for a comment about this. I've been patiently following along for over a decade now and I'm not optimistic anything will come from the project :(
Yeah, I guess not at this point, but the presentations were very interesting to watch. According to the yearly(!) updates on their website, they are still going but not really close to finishing a product. Hm.
It has been catching up but is still inadequate, at least from the compiler optimisation perspective.
The lack of high-performance RISC-V designs means that C/C++ compilers produce all-around good but generic code that can run on most RISC-V CPUs, from microcontrollers to the few commercially available desktops and laptops, but cannot exploit the high-performance design features of a specific CPU (e.g. instruction timings or the instruction sequences recommended for each generation). The real issue is that high-performance RISC-V designs have yet to emerge.
Producing a high-performance CPU is only one part of the job; the next part requires compiler support, which can't exist unless the vendor publishes extensive documentation explaining how to get the most out of it.
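To make that concrete, here's a minimal sketch of what per-CPU tuning looks like with a GCC- or Clang-style RISC-V toolchain; the -mtune target below (sifive-u74) is just an illustrative example, and the available names vary by compiler and version:

    /* saxpy.c: a loop the compiler can schedule and unroll differently
       depending on how much it knows about the target pipeline. */
    void saxpy(float *restrict y, const float *restrict x, float a, int n) {
        for (int i = 0; i < n; i++)
            y[i] += a * x[i];
    }

    /* Generic build: runs on anything implementing RV64GC, but the
       compiler only has a generic cost model to work with:
           gcc -O3 -march=rv64gc -c saxpy.c
       Tuned build (illustrative): same ISA, but scheduling and unrolling
       follow a specific core's pipeline description:
           gcc -O3 -march=rv64gc -mtune=sifive-u74 -c saxpy.c
       Without vendor documentation of instruction timings, there is
       nothing meaningful for the compiler to put behind that -mtune name. */

The -mtune flag only helps once someone has actually taught the compiler the target's pipeline behaviour, which is exactly the documentation gap described above.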