
Good writeup. Except this part:

"Old devs that did Hardware know modern IT are heading the wrong direction. But, business men don't care, journalists interview the successful business men, not the coders that are doing the actual job and know how the machinery behaves."

The majority was heading in that direction. However, there's been steady uptake of very different models that address some of the problems you mention. Here are a few of them:

1. Simple SIMD/MIMD accelerators like DSP's, vector processors, and so on. GPGPU started dominating this market, but there are still simple accelerators on the market (a minimal SIMD sketch follows the list).

2. RISC multicore CPU's plus accelerators. My favorite of this bunch is Cavium's Octeon II/III, just due to the combo of straightforward cores, good I/O, and accelerators for stuff we use all over the place. Cranking out a mass-market, affordable version of that is a great start at countering Intel/AMD's mess. That IBM, Oracle, and recently Google have maintained RISC components will help a transition.

3. "Semi-custom." AMD led the way w/ Intel following on modifications to their CPU's for customers willing to pay. All we know is there's a ton of money going into this. Who knows what improvements have been made. Potential here is to roll back some of the chaos, keep ISA compatibility with apps, and add in accelerators.

4. FPGA's w/ HLS. Aside from usable HLS, I always pushed for FPGA's to be integrated as deeply with the CPU as possible for low-latency, high performance. SGI led by putting it into their NUMAlink system as RASC. My idea was putting it in the NoC on the CPU. Intel just bought Altera. Now I'm simply waiting to see my long-term vision happen. :)

5. Language-oriented machines with an understandable, hardware-accelerated model for SW. The Burroughs B5000 (ALGOL machine) and LISP machines got this started. The only one still in the server space is Azul Systems' Vega3 chips: many-core Java CPU's with pauseless GC for enterprise Java apps. There's a steady stream of CompSci prototypes for this sort of thing, with lots of work on prerequisite infrastructure like LLVM. Embedded keeps selling Forth, Java, and BASIC CPU's. Sandia's Score processor was a high-assurance Java CPU that worked first-pass in silicon. Plenty of potential here to implement a clean model and unikernel-style OS while reusing silicon building blocks like ALU's from RISC or x86 systems for high performance. Intel won't do it because of what happened to the iAPX 432 and i960. They're done with that risk, but would probably sell such mods to a paying customer of the semi-custom business.

6. Last but not least: all the DARPA, etc., work on exascale that basically removes all kinds of bottlenecks and bad craziness while adding good craziness. Rex Computing is one whose founder posts here often. There are others. Lots of focus on CPU vs memory accesses vs energy used.
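To make item 1 a bit more concrete, here's a minimal sketch of the SIMD programming model those accelerators expose, written with plain x86 SSE intrinsics. The function and arrays are made up for illustration and not tied to any particular DSP or vector processor: one instruction operates on four floats at a time, with a scalar loop mopping up the tail.

    /* Minimal SIMD sketch: add two float arrays four lanes at a time. */
    /* Illustrative only; real DSPs and vector CPUs expose the same    */
    /* idea through their own intrinsics or vector ISAs.               */
    #include <immintrin.h>
    #include <stddef.h>

    void add_arrays(const float *a, const float *b, float *out, size_t n)
    {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {            /* vector part: 4 floats/op */
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
        }
        for (; i < n; i++)                      /* scalar tail */
            out[i] = a[i] + b[i];
    }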

So, these 6 counter the mainstream trend of ever-bigger CPU's with varying success in the business sector. Language-oriented stuff barely exists, while RISC + accelerators, semi-custom, and SIMD/MIMD via GPU's say business is a boomin'. The old guard's voices weren't entirely drowned out. There's still hope. Hell, there are even products up for sale: Octeon II cards are going for $200-300 on eBay w/ Gbps accelerators on them. After I knock out some bills, I plan to try another model to get away from the horrors of Intel. :)



I think the Azul Vega line has been beaten by commodity x86-64 hardware and is now a legacy system. This bit I just noticed at the end of the company's page on the Vega3 reinforces that (https://www.azul.com/products/vega/):

"Choose Vega 3 for high-capacity JDK 1.4 and 1.5-based workloads. For applications built in Java SE 8, 7 or 6, check out Zing, a highly cost-effective 100% software-based solution containing Azul’s C4 and ReadyNow! technology, optimized for commodity Linux servers."

Much the same fate as custom Lisp processors.


I interviewed Gil Tene (founder of Azul and designer of the Vega) at QCon London this week. One of the Vega's advantages was that hardware transactional memory support was built in, which meant it could perform optimistic locking when entering synchronised methods, leading to reduced contention in heavily multi-core/multi-threaded workloads. It worked by tracking which memory areas were loaded into cache lines and only permitting write-back once the transaction was committed, using the chip's existing cache coherency protocols along the way.

When they pivoted to commodity x86-64 chips, the HTM code couldn't be ported over due to lack of support in the chips. However, Moore's law meant that the Intel processors were faster than the Vega anyway, in the same way that an iPhone is more powerful than the Cray-1 was. So it was still a win.

He was particularly excited about Intel's latest generation of server CPUs, which now have this (in the form of the TSX extensions, if I remember correctly). He predicted that as these became available, HTM support might be re-integrated into Zing - though of course this was speculation rather than a promise.
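To give a feel for what that looks like on Intel's side, here's a rough lock-elision sketch using the TSX/RTM intrinsics in C - my own illustration, not Azul's or Zing's code, and the toy spinlock and counter are hypothetical. The critical section runs speculatively inside a transaction; the CPU buffers the touched cache lines and only lets them write back on commit, falling back to the real lock if the transaction aborts - essentially the mechanism described for the Vega above.

    /* Rough lock-elision sketch with Intel TSX/RTM intrinsics.        */
    /* Needs gcc -mrtm and a TSX-capable CPU; purely illustrative.     */
    #include <immintrin.h>
    #include <stdatomic.h>

    static atomic_int lock_taken;   /* 0 = free, 1 = held (toy spinlock) */
    static long counter;

    static void take_lock(void)    { while (atomic_exchange(&lock_taken, 1)) ; }
    static void release_lock(void) { atomic_store(&lock_taken, 0); }

    void increment(void)
    {
        unsigned status = _xbegin();           /* begin hardware transaction   */
        if (status == _XBEGIN_STARTED) {
            if (atomic_load(&lock_taken))      /* put the lock in our read-set */
                _xabort(0xff);                 /* so a real locker aborts us   */
            counter++;                         /* speculative, held in cache   */
            _xend();                           /* commit: write-back allowed   */
        } else {
            take_lock();                       /* abort or no TSX: fall back   */
            counter++;                         /* to the ordinary lock         */
            release_lock();
        }
    }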

His talk on hardware transactional memory is here, and the slides/video will be available later on InfoQ:

https://qconlondon.com/presentation/understanding-hardware-t...

Obviously the talk/interview isn't there yet (it was only done Mon/Tue this week), but this will be where it appears in the coming months:

http://www.infoq.com/author/Gil-Tene

Disclaimer: I believe that Azul was a sponsor of QCon, but I am writing this and interviewed Gil because I'm a technology nerd and write for InfoQ because I want to share. Apologies if this comes over as an advert, which is not the intent.


Shit... doesn't surprise me. Except that's not a total loss. We don't know why it lost in the market. I'm guessing it's because a niche player tried to differentiate against Intel's silicon on performance - something Intel optimizes for relentlessly. That's a fail for sure.

However, differentiating on acceptable performance with better reliability, analysis, security, and so on might work. The embedded stuff does this successfully with Java CPU's and high-security VM's. An old trick that might help is to implement the CPU in microcode on top of ultra-optimized silicon like Intel's while hiding the ugliness. Basically, the semi-custom stuff again. A lot of potential there.



