I still remember the shock when my father told me he had connected his laptop to the internet without a cable. I'd heard of wireless networking, but I didn't know it was already a standard feature in laptops at the time, and that all you needed was to find a Wi-Fi access point.
Does that mean the quality of voice calls in that era was better than in later systems? It seems logical that there would be a loss of quality when a weak signal is amplified.
Right, quality was poor, and the transmitter and receiver were part of the problem. They experimented with different methods, but without amplification they needed a transmitter that worked at the full voltage necessary for the signal to travel the entire distance.
This is great news for the entire ARM ecosystem. The fact that ARM is now outperforming the best x86 CPUs marks a historic turning point, and other manufacturers are sure to follow suit.
> The fact that ARM is now outperforming the best x86 CPUs marks a historic turning point, and other manufacturers are sure to follow suit.
Haven't they been playing leapfrog for years now? I avoid the ARM ecosystem now because of how non-standardized the BIOS situation is (especially after being burned by several different SoC purchases), and I prefer compatibility over performance, but I think there have been high-performance ARM chips for quite some time.
After soldering a lot of fast RAM onto the boards of newer laptops and phones, I came to realize that maybe it's not the instruction set that matters all that much.
Modern Apple hardware has so much more memory bandwidth than the x86 systems it's being compared to; I'm not sure it's apples to apples.
The A19 has WAY less bandwidth on its 64-bit bus than desktop chips with 128-bit buses. AMD's Strix Halo is also slower despite a 256-bit bus.
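To make the bus-width point concrete, here's the back-of-the-envelope arithmetic as a sketch. The transfer rates below are my own assumptions (the 64-bit one chosen to line up with the ~76 GB/s iPhone figure cited later in the thread), not measured specs:

```scala
// Peak theoretical bandwidth = (bus width in bytes) * (mega-transfers/s).
// Transfer rates are assumed, typical-class figures, not measured specs.
def peakGBps(busBits: Int, megaTransfersPerSec: Int): Double =
  (busBits / 8.0) * megaTransfersPerSec / 1000.0

println(peakGBps(64, 9600))   // ~76.8 GB/s: 64-bit LPDDR5X (A19-class phone)
println(peakGBps(128, 6000))  // ~96.0 GB/s: 128-bit DDR5-6000 desktop
println(peakGBps(256, 8000))  // ~256.0 GB/s: 256-bit LPDDR5X (Strix Halo-class)
```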
Pushing this point further, x86 chips are also slower when the entire task fits in cache.
The real observation is how this isn’t some Apple black magic. All three of the big ARM core designers (Apple, ARM, and Qualcomm) are now beating x86 in raw performance and stomping them in performance per watt (and in performance per watt per area).
It's not just Apple's deep pockets either. AMD spent more on R&D than ARM's entire gross profit last I checked. Either AMD sucks or x86 has more technical roadblocks than some people like to believe.
Spot on about the memory: even some of the actual M* models don't have all that high a memory bandwidth, yet they kick ass just as well as the ones that do in this kind of benchmark.
I do feel like x86 has more technical roadblocks, but I disagree that the amount of investment isn't the primary driving factor at this point. I haven't seen designs from ARM itself beat x86 on raw performance yet, and 100% of their funding goes toward exactly that. E.g. the X925 core certainly doesn't, nor does the top single-core Android device on e.g. Geekbench come close to current iOS/PC device scores. They've announced some future parts like the C1 that are supposed to, but now we're talking marketing claims about upcoming 2026 CPUs vs. Zen 5 from 2024. Perf/watt-wise, absolutely, of course; that ship sailed long ago. Z1/Z2 were admirable attempts in that regard, but still a day late and a dollar short compared to the leading ARM designs.
The other factor to consider is that scale-out CPUs with massive DC core counts tend to have mediocre single-core performance, and that's what AMD really builds Zen for. Compare to Graviton in the DC and AMD is actually performing really well in single- and multi-core performance, perf/watt, and perf/dollar. It just doesn't scale down perfectly.
Apple/Qualcomm have certainly dumped more R&D into making their cores low-core-count beasts, and it shows against any competition (ARM or x86). The news likes to talk about how many of the Nuvia developers came from working on Apple Silicon, but I think that is a bit oversold; I think it's mostly that those two development programs had a ton of investment targeting this specific use case as the main outcome.
The X925 does, according to GeekerWan. The C1 Ultra is even faster. The x86 GB6 results are from the Geekbench website; I searched for the fastest overall scores I could find in the first few pages of GB6 results to steelman as best I could.
The long and the short of it is that x86 is WAY behind in every way. The chips are larger, hotter, and slower too. If the rumored A19 Pro in a $500 laptop happens, it's going to absolutely crush the Wintel market.
The stuff about Graviton is missing a key element too. Look at the X3 scores below: they are around 30% slower than the x86 competitors, and that's what Graviton 4 is using (Neoverse V2 is based on the X3). Neoverse V3 was announced almost 2 years ago now and is based on the X4, which is a pretty big jump. I'd expect Neoverse V4 in Feb 2026 to be based on either the X925 or C1 Ultra. When these newer, faster cores hit the market, they will beat x86 in cost (the cores are smaller) and power consumption, if not peak performance too.
I talked to a guy who'd worked at Apple on the chips. He more or less said the same thing: it's the memory that makes all the difference.
This makes a lot of sense. If the calculations are fast, they need to be fed quickly. You don't want to spend a bunch of time shuffling data between various caches.
Memory bandwidth/latency is helpful in certain scenarios, but it can easily be oversold in the performance portion of the story. E.g. the 9950X and 9950X3D are within less than 1/20th of a percentage point of each other in PassMark single-thread (feeding a single core is dead easy), but have a spread of ~6.4% (in favor of the 9950X3D) in multi-thread (where the extra cache on the one CCD starts to help). It could just as easily have gone in the other direction, or been 10 times as large, depending on what the benchmark was trying to do. For most day-to-day user workloads, though, the performance difference from memory bandwidth/latency is in the "nil to some" range.
Meanwhile the AI Max+ 395 has at least twice the bandwidth and the same number of cores, yet comes in at more like a ~15% loss on single-thread and a ~30% loss on multi-thread, due to other "traditional" reasons for performance differences. I still like my 395, though, but more for the following reason.
The more practical advantage of soldered memory on mobile devices is the power/heat reduction. The same goes for increasing the cache on E-cores: you get something out of every cycle you power, rather than trying to increase overall computation with more wattage (i.e. more transistors or higher clocks). Better bandwidth/latency is a cool bonus, though.
For a hard number, the iPhone 17 Pro Max is supposed to be around 76 GB/s, yet my iPhone 17 Pro Max posts a higher PassMark single-core score than my 9800X3D, which has a larger L3 cache and RAM operating at >100 GB/s. The iPhone does have a TSMC node advantage to consider as well, but I still think it just comes out ahead due to "better overall engineering".
It's very possible I am misinterpreting, but the A19 seems to have less total memory bandwidth than, say, a 9800X (though not by much), and far less than the Max and Ultra chips that go into MacBooks.
So I think there's more to it than memory bandwidth.
x86 competed on clock speed for the longest time, so those chips use cell libraries designed for higher frequencies. This means the transistors are larger and less dense. ARM cores target energy efficiency first, so they use denser cells that don't clock as fast. The trade-off is that they can have larger reorder buffers and larger scheduling windows to squeeze out better IPC. As frequency scaling slows down more than density scaling does, you get better results going the slower-but-denser route.
The book is a theoretical and practical guide to understanding the principles of programming languages. Unlike books that teach a single language for application development, this one focuses on the semantics, syntax, and core concepts that are common across languages. It uses Scala as the main teaching language to build interpreters and type checkers, but its goal is not to teach Scala itself; rather, Scala is a tool to explore universal programming language principles.
The book covers key programming language features such as immutability, functions, pattern matching, recursion, mutation, garbage collection, lazy evaluation, continuations, type systems, algebraic data types, and polymorphism. It introduces these by first presenting them in simplified “toy” languages and then showing how to implement interpreters and type checkers for them. This approach ensures readers understand not just how to use language features, but why they work and what rules govern them across programming languages.
Its importance compared to other programming books lies in its generality. Most beginner programming books teach one specific language (e.g., Python, Java, C++) and focus on syntax and usage. This book instead equips readers with the foundational concepts of programming languages so that they can more easily learn any new language in the future. By separating syntax (surface-level appearance) from semantics (underlying meaning), it teaches readers to recognize the deep commonalities among languages, making it a valuable resource for students, researchers, and advanced programmers aiming to go beyond coding into programming language theory and design.
Implementing a programming language is very easy when you have pattern matching and algebraic data types (which imply a type system). Python, JS, and C lack some or all of these. Scala is not special in this regard, though; any modern language with FP ancestry will have these features.
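For a sense of what that looks like, here's a minimal sketch in Scala (Scala 3, top-level definitions); the toy arithmetic language is my own illustration, not one of the book's:

```scala
// The ADT gives us the syntax tree of a tiny arithmetic language,
// and pattern matching gives us the interpreter almost for free.
sealed trait Expr
case class Num(value: Int)           extends Expr
case class Add(lhs: Expr, rhs: Expr) extends Expr
case class Mul(lhs: Expr, rhs: Expr) extends Expr

def eval(e: Expr): Int = e match {
  case Num(n)    => n
  case Add(l, r) => eval(l) + eval(r)
  case Mul(l, r) => eval(l) * eval(r)
}

// (1 + 2) * 3 evaluates to 9
println(eval(Mul(Add(Num(1), Num(2)), Num(3))))
```

The `sealed` keyword is what makes this pleasant: the compiler knows every possible `Expr` case and warns you if the interpreter misses one.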
So it's good because Scala is well suited to implementing a programming language, not because Scala itself is good on its own for general, abstract, pseudocode-like code? Or does Scala read cleanly (like people say Python does) and also elegantly extend its own simple syntax?
It bugs me how languages like JS are clean but any custom domain-specific language made in them is ugly.
For the authors of this book, I imagine it was a combination of Scala having the features they wanted, plus it being relatively popular, easy to install, and having good tooling.
I've written a lot of Scala over the last 15 years or so, and I really like the language. It has features that most programmers aren't familiar with, which can scare some people off. My opinion is that if you understand the features, it is a very elegant and simple language, particularly Scala 3.
It was moved to a new path, so you have to create symlinks in both locations pointing to the new path under /usr/lib/newlib1.2/newfile.so. Otherwise, you can download a script that will create them for you, but it will only work if you have all of the script's dependencies installed and their version numbers match the ones the script's author had when he wrote it.
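If you'd rather not trust the script, here's a minimal sketch of creating the links yourself via java.nio.file. The two old locations below are hypothetical placeholders, since the comment doesn't say which paths they are; substitute whatever your system actually used:

```scala
import java.nio.file.{Files, LinkOption, Path, Paths}

object FixLinks {
  def main(args: Array[String]): Unit = {
    // New location of the library, per the comment above.
    val target: Path = Paths.get("/usr/lib/newlib1.2/newfile.so")

    // Hypothetical old locations; replace with the real paths.
    val oldLocations: Seq[Path] = Seq(
      "/usr/lib/newfile.so",
      "/usr/local/lib/newfile.so"
    ).map(Paths.get(_))

    // Create each missing link (NOFOLLOW so a stale broken link is
    // detected); needs write permission on the parent directories.
    for (link <- oldLocations if !Files.exists(link, LinkOption.NOFOLLOW_LINKS))
      Files.createSymbolicLink(link, target)
  }
}
```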