
> Thankfully Intel and AMD are doing pretty good in performance department with their latest chips and it's only going to get better.

I often wonder how AMD's laptop chips would compare with the M1 if AMD were able to use TSMC's cutting-edge fabrication node. Apple uses TSMC's best node, as I understand it.



I think performance-wise, Intel 12th gen and Ryzen 6000 are already a little ahead. Power draw will be helped by moving to TSMC's latest node, but general-purpose x86 will always have some power disadvantage compared to the specialized ARM hardware and software that Apple makes.


Yes, x86 will always have some power disadvantage because ARM's heritage is low-power embedded devices (and RISC). x86 has other advantages, like a very mature and optimized software stack with good compilers.

Apple also has the advantage of cramming everything onto a single piece of silicon, while AMD has gone in for the chiplet approach. The single piece of silicon reduces yields but increases performance at a lower power draw. The chiplet approach followed by AMD is more modular, less risky, and cost-effective.
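
To make the yield point concrete, here is a minimal back-of-the-envelope sketch using the simple Poisson yield model (yield ≈ exp(-D0 * A)); the defect density D0 below is an assumed number purely for illustration:

    import math

    D0 = 0.1  # assumed defect density in defects/cm^2 (illustrative only)

    def poisson_yield(area_cm2):
        """Probability that a die of the given area has zero defects."""
        return math.exp(-D0 * area_cm2)

    # One big 4 cm^2 monolithic die vs. four 1 cm^2 chiplets of the same total area.
    print(f"4 cm^2 monolithic die yield: {poisson_yield(4.0):.2f}")  # ~0.67
    print(f"1 cm^2 chiplet yield:        {poisson_yield(1.0):.2f}")  # ~0.90
    # A defect scraps only one small chiplet instead of the whole chip, which is
    # why chiplets are cheaper, at the cost of cross-die interconnect power.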

So if both AMD and Apple used the same TSMC node _and_ AMD went in for "cost is not an objective", cramming everything onto a single piece of silicon, _and_ added on-package high-bandwidth memory (what Apple markets as unified memory), it would be really interesting for the two to go head to head.

I would definitely be interested in paying good money for such an x86 client system!

I hope someone at AMD is listening!


It's all about tradeoffs, isn't it? You can't have a diverse, extensible, open ecosystem like x86 and get every last ounce of performance and power efficiency; something's gotta give. But the good news is you can get pretty close with great engineering, and competition keeps that up. Maybe one of the many x86 vendors will build such a system: Lenovo and Microsoft are working with AMD on the new custom-designed ThinkPad Z series lineup, and I hear good things about it.


Apple has had a lead on using TSMC's latest process. However, the lead in iGPU performance and perf/watt is a fair bit larger than you'd expect from the process differences.

I'm still puzzled why, during a long GPU shortage when supply was short and prices were insane, nobody in the x86-64 world managed a memory interface wider than 128 bits for the benefit of an iGPU. Apple desktops and laptops have options for 256-, 512-, and even 1024-bit (on the Studio) wide memory systems.
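
Rough napkin math on what the bus width buys you; this is a minimal sketch, and the LPDDR5-6400 transfer rate is an assumption just to show the scaling:

    # Theoretical peak bandwidth = (bus width in bytes) * (transfers per second)
    MT_PER_S = 6400  # assuming LPDDR5-6400, purely for illustration

    for bits in (128, 256, 512, 1024):
        gb_per_s = bits / 8 * MT_PER_S / 1000
        print(f"{bits:4d}-bit bus -> {gb_per_s:6.1f} GB/s peak")

    #  128-bit ->  102.4 GB/s (typical x86 laptop)
    #  256-bit ->  204.8 GB/s (M1 Pro class)
    #  512-bit ->  409.6 GB/s (M1 Max class)
    # 1024-bit ->  819.2 GB/s (M1 Ultra / Mac Studio class)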


Arm has a structural advantage in the decode stage (instructions all have the same length) and a huge lead in low-power systems that it got by heritage, not to mention less historical baggage (x86_64 CPUs still have a 16-bit mode and an 80-bit FPU).
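
A toy sketch of the decode point: with a fixed 4-byte encoding you can find every instruction's start address independently, while variable-length x86 decoding has a serial dependency on each previous instruction's length (the byte lengths below are made up):

    # Fixed 4-byte instructions (AArch64): instruction i starts at 4*i,
    # so a wide front end can point many decoders at a fetch block in parallel.
    def fixed_boundaries(n):
        return [4 * i for i in range(n)]

    # Variable-length instructions (x86: 1 to 15 bytes): each boundary depends
    # on the previously decoded length, an inherently sequential step.
    def variable_boundaries(lengths):
        offsets, pos = [], 0
        for length in lengths:
            offsets.append(pos)
            pos += length
        return offsets

    print(fixed_boundaries(4))                # [0, 4, 8, 12]
    print(variable_boundaries([3, 1, 7, 2]))  # [0, 3, 4, 11] (hypothetical lengths)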

I think the lead the M1 has is bigger than what can be attributed to the different node.


AMD does use the same node as Apple now. But that will most likely change with the M2 Pro, or the M3 at the latest.


As far as I know, Ryzen 6000 chips use the 6nm node; the 7000 series will use the 5nm node, and it's not out yet?


Ah yes. You are right! It does use 6nm.



