
Then this is for those handful of cases. When it matters, it matters.

I remember those Cyrix chips well. We had a little shop where we would assemble boxes to spec. And hey, a 486 is a 486, we reasoned. They were cheap, ran cool, and just about as fast as the others.

Me too, and I recall the Cyrix "Pentium-like" chips were cheaper and faster than Intel's actual Pentium chips! [1]

[1] https://liam-on-linux.livejournal.com/49259.html


For the vast, vast majority of use cases, they are faster, yes.

Cyrix chips get too much hate because of Quake being optimized specifically for the Pentium and its FPU.


The Cyrix 686 166 PR200 was flying on Linux.

Cost of doing business...

This. Meta made $60B in net income in 2025.

Proportionally, it's as if an individual who makes $60K a year gets a speeding fine of $375. It might be moderately annoying, but it's not really going to be remembered in a month.
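The scaling in that analogy checks out as a one-liner. A quick sketch in Python, using only the figures from the comment itself (the implied corporate fine comes out to $375M, in the "few hundred million" range mentioned elsewhere in the thread):

```python
meta_income = 60e9        # Meta's reported 2025 net income, per the comment
person_income = 60e3      # a $60K individual salary
scale = meta_income / person_income  # 1,000,000x

speeding_fine = 375       # the individual's hypothetical ticket
corporate_fine = speeding_fine * scale
print(f"${corporate_fine:,.0f}")  # $375,000,000
```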

Has anyone in leadership at Meta faced even the prospect of jail time for what they've done over all these years?

They'll get Congressional Medals of Honor sooner than that.

If you can make $60B and occasionally pay a few hundred million in fines, the math kind of answers itself.

"We went a little over the line to figure out where the line is, so, we can now guarantee you, dear shareholder, that we're extracting the absolute maximum possible value! Isn't that splendid!"

More like “we found a company doing business in the EU who has deep pockets. I bet we can get 500 mil from them and they won’t leave.”

Who issued this fine?

If you want to switch back and forth between "these two web things side by side" and "something else" over and over, then it's better than two full browser windows side by side, because the pair comes into the foreground and recedes into the background as a unit.

It is a bit of a continuation of the somewhat annoying trend of integrating features into apps that should be part of the window manager (tabs being the first example). This one is extra awkward because even Windows (which has long lagged on window management) can now treat two things side by side as a single unit.

Gosh, my brain just got all fuzzy going through those one after the next. Transitioning from the previous era of CGA to 16 colors was so very exciting at the time.

I feel an ocular migraine coming on looking at these.

On CRT displays, did these not cause visual problems in the same way? I remember having no trouble looking at these years ago.


I had a friend in ninth grade in the late 1900s who was a talented artist. He used his skills to make beautifully expressive pixel-art hardcore pornography on the TI-82.

He crafted a few different scenes, where for each one, he set it to loop back and forth between two frames -- and the implied motion was fantastically realistic for the resolution and fps he was working with...


> late 1900s

Oh, God, why you gotta do us like that?


What was the first letter of the city in which that happened?

Let me print that out and send it to my representative, requesting immediate and hardcore age-verification laws for advanced calculators.

It's shocking to see what a bored teenager with some pixels can do.

...and while we're at it, pens, pencils, and paints should be banned too, just in case; your friend could draw on paper, after all.


Reminds me of cPanel, from the late 1900s: https://en.wikipedia.org/wiki/CPanel

My own early sysadmin experience was with Ubuntu eBox, and I hated it, because none of the expected configuration files or commands you would find on Stack Overflow worked on an eBox-managed server. You did configuration through the UI or not at all.

Debugging was also impossible, because the logs were not in the expected places, and a standard grep over log and conf files would give you nothing.

Cockpit is way better than that, partially because of systemd, but also D-Bus and other relatively new APIs in the Linux plumbing layer, which finally allowed a consistent, stateless management UI for a system.


Trigger warning. This is taking me back to when I ran my own "web hosting provider" on a PIII with 128 MB of RAM back in the early 2000s (I was 13).

Late 1900s :( I still have to deal with cPanel, as I have friends hosting their sites on GoDaddy, which still uses it extensively in 2026...

Came here to say this too.

This is as much an indictment of AWS compute as it is anything else.


Kinda comparing apples to oranges: AWS was using EBS, not local instance storage, so you're easily looking at another order of magnitude of latency going over the network versus a local PCIe bus. That's going to be a huge factor in what I assume is a heavy random-seek load.


I wrote a longer comment already (https://news.ycombinator.com/item?id=47352526) but looking at the hot run performance and making big hand wavy guesses, the performance difference might not be as big as you'd expect.


But AWS beat the laptop? And there's no cost-to-performance analysis? Yes, AWS is overpriced, but how do you reach that conclusion from this specific article? Because network disks were slower than SSDs? AWS also has instances with local SSD storage.


I haven't tried the newer I7i and I8g instance types (the newest instances with local storage) for myself, but AWS claims "I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances."

I benchmarked I4i at ~2GB/s read, so let's say I7i gets 3GB/s. The Verge benchmarked the 256GB Neo at 1.7GB/s read, and I'd expect the 512GB SSD to be faster than that.
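A back-of-the-envelope check on those throughput numbers. A sketch using only the comment's figures; note the I7i number is the commenter's own 50% extrapolation from the I4i benchmark, not a measurement:

```python
i4i_read = 2.0             # GB/s, benchmarked I4i read throughput
i7i_read = i4i_read * 1.5  # AWS claims "up to 50% better" for I7i
neo_read = 1.7             # GB/s, The Verge's 256GB Neo figure

# Even with the optimistic extrapolation, the gap is under 2x
print(f"I7i vs Neo: {i7i_read / neo_read:.2f}x")  # ~1.76x
```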

Of course, an application specific workload will have its own characteristics, but this has to be a win for a $700 device.

It's hard to find a comparable AWS instance, and any general comparison is meaningless because everybody is looking at different aspects of performance and convenience. The cheapest I* instance is $125/mo on-demand, $55/mo if you pay for three years up front, or $30/mo if you can work with spot instances. An i8g.large has 468 GB of NVMe, 16 GB of RAM, and 2 vCPUs (proper cores on Graviton instances; Intel/AMD instance headline numbers include hyperthreading).


My point is the conclusion can't be made from the article.


The article is literally saying the opposite. Quote:

> Here's the thing: if you are running Big Data workloads on your laptop every day, you probably shouldn't get the MacBook Neo.

> All that said, if you run DuckDB in the cloud and primarily use your laptop as a client, this is a great device


Yeah, this is really about how ludicrously overpriced big cloud is. I’ve got a first gen M1 Max and it destroys all but the largest cloud instances (that cost its entire current market value per month!), at least in compute. It’s a laptop! A decent bare metal server in a rack will destroy any laptop.

It’s staggering. Jaw-dropping. Bandwidth is even worse: something like a 10,000x markup.

Yet cloud is how we do things. There’s a generation or maybe two now of developers who know nothing but cloud SaaS.

I watched everyone fall for it in real time.


> I’ve got a first gen M1 Max and it destroys all but the largest cloud instances (that cost its entire current market value per month!)

You're either underestimating how big cloud instances can get or overestimating how much it costs to rent a cloud instance that would beat an M1 Max at any multi-core processing.

According to Geekbench, the M1 Max MacBook Pro has a single-core score of 2374 and a multi-core score of 12257; AWS's c8i.4xlarge (16 vCPUs) scores 2034 and 12807, so they're roughly equivalent.

That c8i.4xlarge would cost you $246/mo at current spot pricing of $0.3425/hr, which is, what, 20% of the cost of that M1 Max MBP?
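For reference, that monthly figure is just the spot rate times a 720-hour month. A sketch using the comment's own price; spot prices fluctuate constantly, so treat this as illustrative:

```python
spot_rate = 0.3425            # $/hr for c8i.4xlarge, per the comment
hours = 24 * 30               # a 720-hour month
monthly = spot_rate * hours
print(f"${monthly:,.2f}/mo")  # ~$246.60/mo
```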

As discussed recently in https://news.ycombinator.com/item?id=47291906, Geekbench is underestimating the multi-core performance of very large machines for parallelizable tasks -- the benchmark's performance peaks at around 12x single-core performance. (I might've picked a different benchmark but I couldn't find another benchmark that had results for both the M1 Max and the Xeon Scalable 6 family.)

If your tasks are _not_ like that, then even a mid-range cloud instance like a 64-vCPU c8i.16xlarge (which currently costs $0.95/hour on the spot market) will handily beat the M1 Max, by a factor of about 4. The largest cloud instances from AWS have 896 vCPUs, so I'd expect they'd outperform the M1 Max by about 50-to-1 for trivially parallelizable workloads. Even if you stay away from the exotic instances like the `u7i-12tb.224xlarge` and stick to the standard c/m/r families, the c8i.96xlarge has 384 vCPUs (so at least 24x the compute power of that M1 Max) and costs $3.76/hr.


> That c8i.4xlarge would cost you $246/mo at current spot pricing of $0.3425/hr, which is, what, 20% of the cost of that M1 Max MBP?

A 5 month ROI on a hardware investment would be excellent, so not sure what you're trying to say here?


5 months is a lot worse than 1 month, which is what the parent claimed.

I agree and disagree. The benefit of cloud is that you "don't need to manage it": it scales automatically, and you get redundancy, automatic backups, etc. I do think you are right, though; as cost pressures become more obvious, there will be more infrastructure as code in the future.


Those benefits are at least partly lies though.

The tooling — K8S with all its YAML, Terraform, Docker, cloud CLI tools, etc. — is pretty hideously ugly and complicated. I watch people struggle to beat it into shape just like they did with sysadmin automation tools like Puppet and Chef a decade or more ago. We have not removed complexity, only moved it.

The auto scaling thing is a half truth. It can do this if you deploy correctly but the zero downtime promise is only true maybe half the time. It also does this at greatly inflated cost.

Today you can scale with bare metal. Nobody except huge companies physically racks servers anymore; companies like Hetzner and DataPacket have APIs to bring boxes up. There’s a delay, but you solve that with a bit of over-provisioning. Very, very few companies have workloads so bursty and irregular that they need fully limitless up-and-down scaling. That’s one of those niche problems everyone thinks they have.

The uptime promise is false in my experience. Cloud goes down for cluster upgrades and any myriad other reasons just as often as self managed stuff. I’ve seen serious unplanned outages with cloud too. I don’t have hard numbers but I would definitely wager that if cloud is better for uptime at all it’s not enough of an improvement to justify that gigantic markup.

For what cloud charges I should, as the deploying user, receive five nines without having to think about it ever. It does not deliver that, and it makes me think about it a lot with all the complexity.

The only technical promise it makes good on, and it does do this well, is not losing data. They’ve clearly put more thought into that than any other aspect of the internal architecture. But there’s other ways to not lose data that don’t require you to pay a 10X markup on compute and a 10000X markup on transfer.

I think the real selling point of cloud is blame.

When cloud goes down, it’s not your fault. You can blame the cloud provider.

IT people like it, and it’s usually not their money anyway. Companies like it. They’re paying through the nose for the ability to tell the customer that the outage is Amazon’s fault.

Cloud took over during the ZIRP era anyway, when money was infinite. If you have growth, raise more; COGS doesn’t matter.

Maybe cloud is ZIRPslop.


Not all IaC is Kubernetes.


With cloud, what you're really paying for is flexibility and scalability. You might not need either for your applications. At some startups, we needed it. We sized clusters wrong, needed to scale up in hours. This is something we wouldn't ever be able to do with our own hardware without tons of lead time.

If your application won't ever require more resources than a single server or two, then you are better off looking at other alternatives.


Honestly I think the best path is hybrid with the cloud as DR and sudden load scaling.


Metal with data streamed to cloud and cloud as hot backup is something some people already do.

If the metal dies in a catastrophic way (multiple nodes at once and loss of quorum, catastrophic DC outage, etc.) you spin it up in AWS.


There are dozens of us, friend. I got my license for the challenge and the learning that came with it; I tuned in to signals near and far but never sent my own voice over the air.


So, what is Motorola's incentive here? I love it, but why are they pursuing this? Is it an enterprise/government play around auditable privacy and security?


They know their software and update story sucks, so partnering with a company that promises to handle all that, and that brings an existing audience, means they'll sell a lot more of that model.


My guess is that this is a great way for them to stand out, fill a niche, and get tons of free advertising in order to win back some Android market share.

Motorola has effectively lost in the Android market and is on a downward spiral into irrelevance (already there?), so they have to do something different.


Add to that that existing GrapheneOS users, at best, only care about good-enough performance and a good camera; the selling feature is security, so there's a lot less marketing overhead for such a phone. Those who want the latest features will continue to buy pixels, Samsung, and iPhones. The only thing missing from the picture, at a quick glance, is a tablet for the few who want a secure tablet device.


"Those who want the latest features will continue to buy pixels"

My friend, the GrapheneOS supported-devices list is nothing but Pixels, including the very latest models. It'll be good to have more supported devices.

https://grapheneos.org/faq#supported-devices


GrapheneOS currently has something like half a million users and growing, and many of those users would love not to be forced onto a Google Pixel (even if those are really good phones).

The question for Motorola is: "given the cost of meeting GrapheneOS's requirements, how many more devices will we sell?" Hundreds of thousands of devices is not nothing, I guess. Plus, they get free consulting from the team building the most secure phone OS out there.

I really don't understand why smaller smartphone manufacturers didn't fight for this before. Take Fairphone: I don't know about today, but a few years ago they finally became profitable by selling something like 200 thousand units a year. If they had designed a phone to be supported by GrapheneOS, that would surely have increased their sales quite a bit. Now that ship has sailed; GrapheneOS will be focused on Motorola for a few years.


Digital sovereignty. Europe is a big market, and Motorola could gain traction this way.


Sell devices to people who want to get out of the grip of US software monopolies. That's not an unpopular desire in the rest of the world.

