I adore my Linux setup and have switched back to it after using M1 Pro for 3 years.
But of all the Dells, Thinkpads and Asus laptops I've had (~10), none came remotely close to the full package that the MBP M1 Pro was.
- Performance - outstanding
- Fan noise - non-existent 99% of the time, cannot compare to any other laptop I had
- Battery - not as amazing as people claim for my usage, but still at least 30% better
- Screen, touchpad, speakers, chassis - all highest tier; some PC laptops do screen (Asus OLED), keyboard and chassis (Thinkpad) better, but nothing groundbreaking...
It's the only laptop I've ever had that gave me the feeling that nothing could come my way that I wouldn't be able to handle on it, without any drama whatsoever.
It's just too bad that I can't run multiple external displays on Asahi...
(For posterity, currently using Asus Zenbook S16, Ryzen HX370, 32GB RAM, OLED screen, was $1700 - looks and feels amazing, screen is great, performance is solid - but I'm driving it hard, so fan noise is constant, battery lasts shorter, and it's just a bit more "drama" than with MBP)
Yes, this is the true dividing factor for me. The battery life of the new ARM laptops is an astounding upgrade from any device I have ever used.
I've been a reluctant MacBook user for 15 years now thanks to it being the de-facto hardware of tech, but for the first time ever since adopting first the M1 Pro and then an M2 Pro I find myself thinking: I could not possibly justify buying literally any other laptop so long as this standard exists.
Being able to run serious developer workflows silently (full kubernetes clusters, compilers, VSCode, multitudes of corpo office suite products etc), for multiple days at a time on a single charge is baffling. And if I leave it closed for a week at 80% battery, not only does that percentage remain nearly the same when resumed-- it wakes instantly! No hibernation wake time shenanigans. The only class of device which even comes close to being comparable are high end e-ink readers, and an e-ink reader STILL loses on wake time by comparison.
I'm at the point now where I'm desperately in need of an upgrade for my 8 year old personal laptop, but I'm holding off indefinitely until I discover something with a similar level of battery performance that can run Linux. As I understand it, the firmware that supports that insane battery life and specifically the suspend functionality that allows it to draw nearly zero power when closed isn't supported by any Linux distro or I would have already purchased another MacBook for personal use.
Excellent power efficiency in apple silicon - good battery life and good performance at the same time. The aluminum body is also very rigid and premium feeling, unlike so many creaky bendy pc laptops. Good screen, good speakers.
Aluminum and magnesium non-Apple laptops are just as stiff. There's just a wider spectrum of options, including $200 plastic ARM Chromebooks available.
I’ve never heard someone describe the aluminum body as bad... what do you not like about it?
The number one benefit is the Apple Silicon processors, which are incredibly efficient.
Then it’s the trackpad, keyboard and overall build quality for me. Windows laptops often just feel cheap by comparison.
Or they’ll have perplexing design problems, like whatever is going on with Dell laptops these days with the capacitive function row and borderless trackpad.
The keyboard and body are not bad at all - rather, they're best in class, and so is the rest of the hardware. It is a premium hardware experience, and has been since Jony Ive left, which is what makes the software so disappointing.
I believe there are a few all-metal laptops competing in the marketplace, but I was unaware they were actually better than the Apple laptops... which all-aluminum laptops are better, and how are they better?
I just turn off trackpads; I'm not interested in that kind of input device, and any space dedicated to one is wasted on me. I use the nub (TrackPoint) exclusively, which essentially restricts me to Thinkpads.
My arms rest on the body, the last thing I want is for it to be a material that leeches heat out of my body or that is likely to react with my hands' sweat and oils.
Strawman. Because Apple designed it well. Metal’s not an issue. My legacy 2013 MacBook Air still looks and feels and opens like new.
I was looking at Thinkpad Auras today. There are unaligned jutting design edges all over the thing. From a design perspective, I’ll take the smooth oblong squashed egg.
Every PC laptop I’ve touched feels terrible to hold and carry. And they run Windows, and Linux only okay. Apple MacBooks are a mile better than everything else, so I don’t care about upgraded memory: buy enough RAM at purchase time and you don’t have to think about it again.
Memory upgrades aren’t priced super well, granted, but I could never buy HP Dell Lenovo ever again. They’re terrible. I’ve had all of them. Ironically the best device I’ve had from the other side was a Surface Laptop. But I don’t do Microsoft anymore. And I don’t want to carry squeaky squishy bendy plastic.
Most of all, I’m never getting on a customer support call with the outsourced vendors that do the support for those companies ever ever ever again. I’ll take a visit to an Apple store every day of the week.
If the Macbook has a bad keyboard (ignoring the Butterfly switches, which aren't on any of the M series machines, which are the ones people actually recommend and praise), then the vast majority of Windows machines have truly atrocious keyboards. I prefer the keyboard on my 2012 Macbook to the newer ones, but it's still better than the Windows machines I can test in local stores.
I prefer the aluminium to the plastic found on most Windows machines. From what I know, the Framework is made from some aluminium alloy, and I see that as a good thing.
The soldered RAM sucks, but it's a trade-off I'm willing to make for a touchpad that actually works, a pretty good screen, and battery life that doesn't suck.
> "I never understood why people claim the Macbook is so good."
Apple's good enough for the average consumer, just like a 16-bit home computer back in the day. Everyone who looks for something bespoke/specialized (e.g. certified dual- or multi-OS support, ECC RAM, right-to-repair, top-class flicker-free displays, size, etc.) looks elsewhere, of course.
- spin loop engine, could properly reset the work-available flag before calling the work function, and avoid yielding if new work was added in between. I don't see how you avoid reentrancy issues as-is.
- lockfree queue, the buffer should store raw storage for Ts, not Ts themselves. As it is, it looks not only like UB, but broken for any non-trivial type.
- metrics, the system seems weakly consistent, which isn't ideal. You could use seqlocks or similar techniques.
- websocket, lacking error handling and handling for slow or unreliable consumers. That could make your whole application unreliable, as you buffer indefinitely.
- order books; first, using double for price everywhere is problematic for many applications and causes unnecessary overhead on the decoding path. Then the data structure handles neither very sparse, deep books nor significant intraday drift. The richness of the data is also fairly low, though what you need is strategy-dependent. Having to sort on query is also quite inefficient when you could just keep your levels in order to begin with, typically with a circular-buffer kind of structure (since the same prices frequently oscillate between the bid and ask sides, you just need to track where the bid/ask start/end).
- strategy, the system doesn't seem particularly suited for multi-level tick-aware microstructure strategies. I get more of a MFT vibe from this.
- simulation, you're using a probabilistic model for fill rate with market impact and the like. In HFT I think precise matching engine simulation is more common, but I guess this is again more of a MFT tangent. Could be nice to layer the two.
- risk checks, some of those seem unnecessary on the hot path, since you can just lower the position or pnl limits to order size limits.
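The spin-loop reset ordering mentioned above could be sketched like this; the names (`SpinEngine`, `notify`, `poll`) are illustrative, not from the project:

```cpp
#include <atomic>
#include <functional>

// Consume-then-drain: the flag is cleared atomically *before* the work
// function runs, so a producer that sets it mid-drain is observed on the
// next poll() and no wakeup is lost.
class SpinEngine {
public:
    void notify() { work_pending.store(true, std::memory_order_release); }

    // Returns true if work was processed; false means the caller may yield.
    bool poll(const std::function<void()>& drain) {
        if (!work_pending.exchange(false, std::memory_order_acquire))
            return false;
        drain();
        return true;
    }

private:
    std::atomic<bool> work_pending{false};
};
```

If the flag were cleared after `drain()` instead, a notification arriving during the drain would be wiped out and the engine could yield with work still queued.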
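On the raw-storage point for the queue: a minimal (single-threaded, deliberately not lock-free) sketch of a ring whose backing array is uninitialized aligned bytes, with elements created by placement new and destroyed explicitly, so non-trivial types work and no default constructor is required. All names are illustrative:

```cpp
#include <cstddef>
#include <new>
#include <utility>

template <typename T, std::size_t N>
class RingStorage {
public:
    ~RingStorage() { while (size_ > 0) pop(); }  // destroy any leftovers

    bool push(T v) {
        if (size_ == N) return false;
        ::new (static_cast<void*>(slot(tail_))) T(std::move(v));  // construct in place
        tail_ = (tail_ + 1) % N;
        ++size_;
        return true;
    }

    bool pop() {
        if (size_ == 0) return false;
        slot(head_)->~T();                        // explicit destruction
        head_ = (head_ + 1) % N;
        --size_;
        return true;
    }

    T& front() { return *std::launder(slot(head_)); }
    std::size_t size() const { return size_; }

private:
    T* slot(std::size_t i) {
        return reinterpret_cast<T*>(buf_ + i * sizeof(T));
    }
    alignas(T) unsigned char buf_[N * sizeof(T)];  // raw storage, not T objects
    std::size_t head_ = 0, tail_ = 0, size_ = 0;
};
```

Declaring the buffer as `T buf_[N]` instead would default-construct N objects up front and assign over live ones, which is exactly what breaks for non-trivial types.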
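For the metrics consistency point, a conventional single-writer seqlock sketch: the writer makes the sequence odd while updating, and a reader retries until it sees the same even value on both sides of its copy. Field and type names are illustrative, and the full fences keep the sketch simple (a strictly standard-conforming seqlock would also need atomic accesses to the payload):

```cpp
#include <atomic>
#include <cstdint>

struct Metrics { std::uint64_t orders = 0, fills = 0; };

class SeqlockMetrics {
public:
    void write(const Metrics& m) {                              // single writer assumed
        std::uint32_t s = seq_.load(std::memory_order_relaxed);
        seq_.store(s + 1, std::memory_order_relaxed);           // odd: write in progress
        std::atomic_thread_fence(std::memory_order_seq_cst);
        data_ = m;
        std::atomic_thread_fence(std::memory_order_seq_cst);
        seq_.store(s + 2, std::memory_order_relaxed);           // even again: stable
    }

    Metrics read() const {
        for (;;) {
            std::uint32_t s1 = seq_.load(std::memory_order_relaxed);
            if (s1 & 1u) continue;                              // writer active, retry
            std::atomic_thread_fence(std::memory_order_seq_cst);
            Metrics snap = data_;
            std::atomic_thread_fence(std::memory_order_seq_cst);
            if (seq_.load(std::memory_order_relaxed) == s1)
                return snap;                                    // no write raced the copy
        }
    }

private:
    std::atomic<std::uint32_t> seq_{0};
    Metrics data_;
};
```

Readers never block the writer, which is why this pattern suits hot-path metrics better than a mutex.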
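And on keeping levels ordered so queries never sort: a sketch using integer price ticks instead of doubles and a dense per-tick array with sliding best-bid/ask indices. A real book would use a circular window re-centered as prices drift; the fixed tick range and all names here are purely illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

class DenseBook {
public:
    // One level per tick in [lo, hi]; prices are integer ticks, not doubles.
    DenseBook(std::int64_t lo, std::int64_t hi)
        : base_(lo), n_(hi - lo + 1),
          bids_(static_cast<std::size_t>(n_), 0),
          asks_(static_cast<std::size_t>(n_), 0),
          best_bid_(lo - 1), best_ask_(hi + 1) {}

    void set_bid(std::int64_t tick, std::uint64_t qty) {
        bids_[idx(tick)] = qty;
        if (qty > 0 && tick > best_bid_) best_bid_ = tick;
        while (best_bid_ >= base_ && bids_[idx(best_bid_)] == 0)
            --best_bid_;                          // slide down past emptied levels
    }

    void set_ask(std::int64_t tick, std::uint64_t qty) {
        asks_[idx(tick)] = qty;
        if (qty > 0 && tick < best_ask_) best_ask_ = tick;
        while (best_ask_ < base_ + n_ && asks_[idx(best_ask_)] == 0)
            ++best_ask_;                          // slide up past emptied levels
    }

    std::int64_t best_bid() const { return best_bid_; }  // base_ - 1 when no bids
    std::int64_t best_ask() const { return best_ask_; }  // base_ + n_ when no asks

private:
    std::size_t idx(std::int64_t tick) const {
        return static_cast<std::size_t>(tick - base_);
    }
    std::int64_t base_, n_;
    std::vector<std::uint64_t> bids_, asks_;
    std::int64_t best_bid_, best_ask_;
};
```

Since a given tick's storage is shared as it oscillates between bid and ask sides, top-of-book queries and level updates are O(1) in the common case, with no per-query sorting.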
That’s a fair point, and I agree on wire-to-wire (SOF-in → SOF-out) hardware timestamps being the correct benchmark for HFT.
The current numbers are software-level TSC samples (full frame available → TX start) and were intended to isolate the software critical path, not to claim true market-to-market latency.
I’m actively working on mitigating the remaining sources of latency (ingress handling, batching boundaries, and NIC interaction), and feedback like this is genuinely helpful in prioritizing the next steps. Hardware timestamping is already on the roadmap so both internal and wire-level latencies can be reported side-by-side.
Appreciate you calling this out — guidance from people who’ve measured this properly is exactly what I’m looking for.
That number is for a non-trivial software path (parsing, state updates, decision logic), not a minimal hot loop. Sub-100 ns in pure software usually means extremely constrained logic or offloading parts elsewhere. I agree there’s room to improve, and I’m working on reducing structural overheads, but this wasn’t meant to represent the absolute lower bound of what’s possible.
It does not. If that were the case, round-trip wire-to-wire latency below 1.0-1.2 microseconds in software would have been impossible. But it clearly is possible: see benchmarks by Solarflare, Exablaze, and others.
13 interviews suggests he was interviewing for multiple roles within the same company; in which case it's not that shocking. In many places every team runs their own interviews.
It might be nicer to go work for startups, acquire experience there as you build everything from scratch across the whole stack, then get hired at a high responsibility position.
Though people into entrepreneurship rarely go back to big corporations.
>acquire experience there as you build everything from scratch across the whole stack
This is not usually how it works. In fact, in my experience, the moment a company becomes a scaleup and brings new leadership in to handle growth, those people start getting rid of the hacky jack-of-all-trades profiles.
Larger companies usually value specialized profiles. They don’t benefit from someone half-assing 20 roles; they have the budget to get 20 experts to whole-ass one role each.
Career paths in large companies usually have some variation of “I’m the go-to expert for a specific area” as a bullet point somewhere.
Smaller companies necessarily have a small team stretched across broad responsibilities; that usually describes startups. If it's scaling up, then yeah, that changes. You want to join small teams for broad experience, startup or regular business.
There are times where a big company needs to build something new (albeit within a constrained ecosystem and a very narrow swimming lane).
To do so, one good way is to hire the experts of that domain that have built it before. That can mean acquiring a small specialized company, or simply hiring its top talent.
You could also repurpose your existing staff, but a big company is unlikely to have a lot of "builders", as most of its staff is just iterating and maintaining things others have built a decade ago. You probably still want to have some of those people in the team anyway, for integration purposes.
It doesn't even take new leadership. As companies grow, they (have to) put more process in place, people tend to have narrower and more tightly defined responsibilities, and the person at a smaller company--even if not a startup--who was cowboying what they saw as needing doing can become a liability rather than an asset.
Big tech companies are also notorious for down-leveling if you’re not coming from another big company, so it might not actually be that good of a move.
He was down-leveled to a first level manager at the company you are at? He accepted this? Why? Do you think he / the new company chose wisely? What ended up happening?
I’m not sure why he accepted it, I never pried too much. It was his first big tech job. It’s very possible he still made more money as a first-level manager, so it might’ve still been a net win for him.
He was a great manager, he’s since moved up the ranks but he’s still at the same big tech co. So from both the company’s and his perspective, I suppose everyone’s happy.
Wouldn't be surprised if it was money. My family member runs a software company; salaries came up recently, and I found out I make as much as their director.
I agree. My point is this is probably unrealistic:
> It might be nicer to go work for startups, acquire experience there as you build everything from scratch across the whole stack, then get hired at a high responsibility position
You mostly don’t get hired into high responsibility positions at big tech from startups, unless you’re acquired by them directly.
There are some notable exceptions obviously, but those generally require you to be some sort of leading domain expert.
It depends on how many people he was in charge of. If he’s CTO of 500 people company where only 40 are engineers, you’re not getting past senior manager at faang.
Most of my titles have been pretty made-up (with the acquiescence of my manager). Never had the formal levels seen at large tech companies. My last job description was written for me and didn't even make a lot of sense if you squinted too hard. Made a couple of iterations for business cards over time.
Couldn't have told you what the HR titles were in general.
It doesn’t work like that. An “architect” at a small startup will get you maybe to a mid level position at BigTech if you pass the coding interview. The scale is completely different.
And those “entrepreneurs” usually make less than a senior enterprise dev working in a 2nd tier city or a new grad at BigTech.
My understanding is that S3 egress is only a problem if you need to take data out of AWS, which you can simply avoid by having some kind of dedicated AWS direct connect or some such to route the traffic yourself?
Connecting to an AWS egress point for direct connect reduces the egress price (about half) but doesn't eliminate it. It also costs thousands of dollars a month just to have the connection, so it's not great for small operations. :-/
If I'm reading the description correctly, egress from there is 2 cents/GB, while the regular price for egress (less than 10 TB) from eu-south-1 is 9 cents/GB. At those rates, 10 TB works out to roughly $200 versus $900.
My experience is that anything involving Bazel is slow, bloated, and complicated, hammers your disk, copies your files ten times over, and balloons your disk usage without ever collecting the garbage. A lot of essential features are missing so you realistically have to build a lot of custom rules if not outright additional tooling on top.
I'm not too surprised that out-of-the-box Docker images exhibit more of this. While it's good they're fixing it, it feels like maybe some of the core concepts cause pretty systematic issues any time you try to do anything beyond the basic feature set...
Seconded. I tried hard to use Bazel in a polyglot repo because I really wanted just one builder.
Unfortunately, the amount of work you need just to maintain the build across language and Bazel version upgrades is incredibly high. Let alone adding new build steps, or going even slightly off the well-trodden path.
I feel like Bazel would need at least 5 more full-time engineers to eventually turn it into an actually usable build tool outside Big Tech. Right now many critical open source Bazel rules get a random PR every now and then from people who don't actually (have time to) care about the open source community.
My go-to now is to use mise + just to glue together build artifacts from every language's standard build tools. It's not great but at least I get to spend time on programming instead of fixing the build.
Yes, each of the big techs has teams that just work on the build systems. But it should also be noted that none of them use the open source Bazel: Google internally uses Blaze, which Bazel is derived from; Amazon uses Brazil, which has nothing to do with Bazel; and Meta uses Buck, which I know nothing of, so I won't comment on it.
The major issue I found when trying to use Bazel was that it's essentially a build system without specific rules for each language. Rules support for each language depends on that language's community, most of which are quite tiny and mostly maintain the rules by upstreaming changes from their individual companies, servicing their own needs. Hence a lot of work is required to make Bazel work for your own company's needs.
I tried Bazel, Buck2 and Pants for a greenfield monorepo recently, Rust and Python heavy.
Of the three, I went with Buck2. Maybe just circumstance with Rust support being good and not built to replace Cargo?
Bazel was a huge pain - it broke all standard tooling by taking over Cargo's job, then was unable to actually build most packages without massive multi-day patching efforts.
Pants seemed premature - front page examples from the docs didn’t work, apparently due to breaking changes in minor versions, and the Rust support is very very early days.
Buck2 worked out of the box exactly as claimed, and it leaves Cargo intact so all the tooling works. I’m hopeful.
Previously I’ve used Make for polyglot monorepos, but it requires an enormous amount of discipline from the team, so I’m very keen for a replacement with fewer footguns.
Any readily available build system is more of a meta-language onto which you code your own logic, but with limited control and capabilities. Might as well take control of the whole stack in a real programming language.
Building my own build system lets me optimize my workflow end-to-end, from modular version management, packaging and releasing, building and testing, tightly integrating whatever tool or reporting I want, all seamlessly under the same umbrella.
I mostly do C++, Assembly, eBPF, Python (including C++ Python modules), and multi-stage codegen on Linux, so I haven't really looked at the complexity of other languages or platforms.
rules_oci (and a bunch of other rules_* under the bazelbuild / bazel-contrib orgs on GitHub) are the Bazel-recommended rule sets.
I don't agree with your parent comment about Bazel, but your comment isn't fair either. Bazel tries to be a better build tool, so it took on responsibility for the registry and rules_*, and getting criticized for that is fair game.
The "bloated Bazel" complaint isn't fair either, but I think it's somewhat understandable. If you're only ever going to do JavaScript, bun or another package manager is enough and "lighter-weight". Same goes for the uv + Python bundle. Bazel only shines if you're dealing with your C++ mess, and even there people prefer CMake for reasons beyond me.
Bad keyboard, bad aluminium body, soldered ram...
Is it just the Apple Silicon that somehow makes it worth it? It's ARM, most software is still written and optimized for x86.