Intel’s Meteor Lake Chiplets, Compared to AMD’s (chipsandcheese.com)
114 points by rbanffy on Sept 12, 2022 | hide | past | favorite | 23 comments


I wasn't familiar with the difference between a chiplet and a typical microprocessor.

In case you also find this information helpful:

> A chiplet is an integrated circuit block that has been specifically designed to work with other similar chiplets to form larger more complex chips. In such chips, a system is subdivided into functional circuit blocks, called "chiplets", that are often made of reusable IP blocks.

https://en.wikichip.org/wiki/chiplet

https://en.m.wikipedia.org/wiki/Chiplet


You can also think of this as a return to the earlier designs that separated the CPU from the northbridge: there's a chiplet for the CPU cores, another that serves the northbridge function (memory, PCIe, etc.), what looks like one for the iGPU, and a fourth that does... I can't tell what (more I/O, I think?)

Smaller dies increase yield, and multiple dies let you use a different process for each one. The I/O may not benefit enough from the newest process to justify its cost, and the newest process is often capacity-limited, so building that part of the design on an older process saves money and frees up production capacity. It makes the whole package larger, but pin count is dictating package size at the moment anyway.


Or to the "slot CPUs" of the Pentium III era, where the CPU and cache were shipped on the same PCB. Or to still earlier designs where multiple chips were designed in tandem to act as a single component: the original IBM RS/6000s had (IIRC) a six-chip "CPU". There are comparatively few new ideas in semiconductor packaging.


Now you made me remember Unisys’ Micro-A. It was an A-series mainframe on an ISA board. The CPU had 8 chips in a single MCM (multi-chip module, as they called it) IIRC.



You can also think of it as a bunch of very small components (CPU, IO, HBM) on a very small PCB that's more or less another larger chip, but with larger features as well (so that it doesn't pay the yield penalties of a large area on a bleeding edge process).


It is an interesting technology. It's basically a packaging innovation, which sounds pretty boring I guess, but die size has a super-linearly bad effect on yield, so breaking the design up into chiplets is very attractive.

Does anyone know of a good place to look up Intel vs. TSMC yield numbers or defect densities?
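To make the "super-linearly bad" point concrete, here's a back-of-envelope sketch using the classic Poisson yield model Y = exp(-D0 * A). The defect density is a made-up illustrative number (real Intel/TSMC figures aren't public), but the shape of the result doesn't depend on it:

```python
import math

# Assumed defect density for illustration only; real fab numbers are secret.
D0 = 0.1  # defects per cm^2

def poisson_yield(area_cm2: float) -> float:
    """Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-D0 * area_cm2)

def silicon_per_good_chip(area_cm2: float) -> float:
    # Wafer area consumed per *working* die. Because yield falls
    # exponentially with area, this grows super-linearly with die size.
    return area_cm2 / poisson_yield(area_cm2)

mono = silicon_per_good_chip(6.0)        # one 600 mm^2 monolithic die
split = 4 * silicon_per_good_chip(1.5)   # four 150 mm^2 chiplets

print(f"monolithic: {mono:.2f} cm^2 of wafer per good chip")
print(f"4 chiplets: {split:.2f} cm^2 of wafer per good chip")
```

Both designs use the same 600 mm^2 of logic, but with chiplets a defect only kills a 150 mm^2 die instead of the whole thing, so the wafer area burned per good chip is substantially lower.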


I thought chiplets were about not building one large monolithic chip, but instead making a few small ones and connecting them together. Smaller chips improve yield.


That is part of it, but it's not the full story. In earlier multi-core CPUs each core was basically its own chip with its own resources and could operate nearly independently of the other cores (not completely independently, since you still need a cache coherency mechanism and usually at least a shared last-level cache). In the chiplet model the chiplets are less independent and rely more on shared resources; for example, the memory controllers might be pulled out of the chiplets entirely and shared between several of them.

There's obviously a gray area, and what makes something a chip versus a chiplet isn't precisely defined, but the name "chiplet" implies a model with smaller dies that share more resources.


Chiplets also drastically improve binning: you no longer need all 16 cores on a monolithic die to run at, for example, 5 GHz; you just need to find four 4-core chiplets that do.

It effectively allows you to mix and match parts (to a degree) to build the highest-performing CPU.
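The binning math is easy to see with a toy model. Assume (purely for illustration) that each core independently hits 5 GHz with probability 0.9. A monolithic 16-core part needs all 16 passing cores on one die; with 4-core chiplets you only need each chiplet to have 4 passing cores, and any four good chiplets from anywhere on the wafer can be combined:

```python
# Toy binning model; p is an assumed per-core pass rate, not real data.
p = 0.9  # probability a single core clocks at 5 GHz

mono_16 = p ** 16    # a monolithic die needs all 16 cores to pass
chiplet_4 = p ** 4   # a single 4-core chiplet needs only 4 passes

print(f"16-core monolithic die bins at 5 GHz: {mono_16:.1%}")
print(f"single 4-core chiplet bins at 5 GHz:  {chiplet_4:.1%}")
```

With these numbers roughly two thirds of chiplets qualify versus under a fifth of monolithic dies, and the failing chiplets can still be sold in lower bins rather than dragging down a whole 16-core die.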


My brain is so used to the term chipset it refused to read the headline correctly at first.


Buy some gum and wait till northbridges start using chiplets too; you'll be comparing chipsets' chiplets while chewing Chiclets.


The gpu being connected via iCXL is super interesting.

I feel like Intel's discrete GPUs have a much stronger chance to stick around, grow, and evolve if the "onboard" GPU is essentially a somewhat smaller discrete chip, potentially using the same slightly-better-than-PCIe connectivity a discrete board uses.

Reciprocally, it'll be interesting to see how AMD powers down and takes Infinity Fabric mobile. I didn't know their mobile lineup was eventually moving off monolithic!

Another superb Chips and Cheese writeup. These folks make me feel like it's two decades ago and we're not afraid to go deep on architecture again. Invigorating; love it!


A few days ago, Moore's Law is Dead reported that Intel has internally decided to kill off the discrete Arc chips.

AFAIK they're still planning to develop iGPUs, so maybe Intel can spend some time quietly building the HW and SW for a proper dGPU sometime in the future.

Although it's anyone's guess how long it will take for anyone to believe Intel about dGPUs again after the Arc debacle.

EDIT: apparently Intel is claiming that the Arc dGPU has not been cancelled. So I guess we'll just have to wait and see. MLID is usually pretty reliable, but I doubt Raja wants to be charged with misleading investors.


There have been claims for months that it has been killed off. I think it's just astroturfing from competitors.


My impression was that MLID does a pretty good job of vetting his leaks.

That said, yesterday I came across some subreddit where the consensus was that MLID is very unreliable. So I guess I'll just need to see how this plays out.

Still, I notice that Raja's tweeted response doesn't clearly reject the rumor. That seems suspicious to me.


Is it just me, or is the SoC tile way larger than anyone expected? From TFA:

> A SoC tile contains most of the functionality found in the system agent in current Intel client CPUs.

Given that this tile utterly dwarfs the CPU cores themselves, what all is going on in there? Why is it so enormous? How much of it do we understand?


I don't know if Intel is doing the same, but AMD's chiplet-based Zen 2 and Zen 3 processors use a less dense manufacturing process (12 nm) for the IO die than for the CPU dies (7 nm), which makes the IO die comparably large.


The uncore has been growing every generation, so this isn't new. I've wondered why it's so large myself. Intel has the audio DSP, ISP, GNA, and now a VPU that mostly go unused, so maybe those account for the bloat.


I'm not a semiconductor engineer, so I could be totally off here, but:

My guess is that for AMD, and likely Intel, the IO die is so large because of all the PHYs on it (DDR, USB, etc.), which set a minimum physical size for the IO die.

Which, if true, explains why you wouldn't care as much about the process node for the IO chiplet: it has a minimum physical size anyway.


iGPU, video de/encoder, USB4, I'm not surprised it's larger.


The GPU is on its own tile


So which is better?



