Hacker News | asd4's comments

The security researcher in the article was concerned about accidentally confirming the prompt on his watch.

I don't think it's a matter of being "smart enough". Human error can easily creep in when dismissing tens or hundreds of prompts.


The prompt UX should step into a special "bombed" mode when a frequency threshold is crossed. At that point, accepting a prompt gets fat-finger protection such as a double confirmation step, and declining all prompts (or perhaps all that share a commonality, like the same initiating IP address) becomes possible.


Or, you know, not allow this kind of brute forcing at all?


They seem to gate ECC support behind Xeon on higher-end processors. You see ECC memory in a lot of workstation-class machines.


I've had good luck with https://www.rockauto.com/ though I haven't bought anything big through them.

The website is fun and nostalgic to me.


"What they mean by IO bound is actually that their system doesn’t use enough work to saturate a single core when written in Rust: if that’s the case, of course write a single threaded system."

Many of the applications I write are like this: a daemon sitting in the background reacting to events. Making them single threaded means I can get rid of all the Arc and Mutex overhead (which is mostly syntactic at that point, but makes debugging and maintenance easier). Being able to do this is one of the things I love about Rust: only pay for what you need.
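A minimal sketch of that shape (all names illustrative): a single thread blocks on a plain std::sync::mpsc channel and mutates locally owned state directly, with no Arc or Mutex in sight.

```rust
use std::sync::mpsc;

// Illustrative event type; a real daemon would get these from
// signals, sockets, timers, etc.
enum Event {
    Ping,
    Shutdown,
}

// The whole daemon is one loop over one receiver: state is plain
// `mut` data owned by this thread, so no locking is needed.
fn run(rx: mpsc::Receiver<Event>) -> u32 {
    let mut pings = 0;
    for event in rx {
        match event {
            Event::Ping => pings += 1,
            Event::Shutdown => break,
        }
    }
    pings
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(Event::Ping).unwrap();
    tx.send(Event::Ping).unwrap();
    tx.send(Event::Shutdown).unwrap();
    println!("handled {} pings", run(rx));
}
```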

The article that this one is responding to calls out tokio and other async libraries for making it harder to get back to a simple single threaded architecture. Sure, there is some hyperbole, but I generally agree with the criticism.

Making everything more complex by default because it's better for high-throughput applications seems to be the opposite of Rust's ideals.


I’ve written services like this, and I would never have called them IO bound. They’re not throughput-bound at all. They mostly sit idle, then they do work and try to get it done quickly to minimize use of system resources. Unless they sometimes get huge bursts of work and something else cares quite a lot about latency during those bursts, using more than one thread adds complexity and overhead for no gain.


A lot of people on the internet are confused about what "IO bound" means, and use it in this incorrect way.


In an era of 10Gb NICs in every server very few things are really IO bound.


The NIC does not really have a lot to do with being IO bound.

IO bound means you spend most of your time waiting on an IO operation to complete. Writes are usually bound by the hardware (how fast your NIC is, how fast your storage is, ...), while reads are bound partly by the hardware but mostly by the "thing" that sends the data. So it's great that you have a 10Gbps NIC, but if your database takes 10ms to run your query, you'll still be sitting on your arse for 10ms to read 1KB of data.


In this context, we're talking about things for which the throughput is IO-bound. You're talking about the latency of an individual request.

Throughput being IO-bound is indeed about the hardware, and the truth is that at the high end it's increasingly uncommon for things to be IO-bound, because our NICs and disks continue to improve while our CPU cycles have stagnated.


In purely practical terms, the old system interfaces are sufficiently problematic that any workload with buffers necessarily smaller than tens of KB will get stuck being syscall-bound first in most implementations. Spectre really didn't help here either.


I think this is where we have to really move towards the io_uring/FlexSC approach.


The speed of your NIC doesn't matter when you are waiting for an INSERT on a DB with a bad schema. Heck, your DB could be on localhost, not even touching the NIC, and it's still the same.


Although NVMe/SSD drives have changed things a lot, any media-creation software is still IO bound in the sense that:

a. you cannot plan to read data from disk on demand, because it will take too long (still!), and it will almost certainly block

b. you cannot plan to write data to disk on demand, because it will take too long (still!) and it will almost certainly block

c. the bandwidth is still a limit on the amount of data that can be handled. It is much higher than it was with spinners, but there is still a limit.
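That constraint is why such pipelines prefetch rather than read on demand. A hypothetical sketch (disk reads simulated): a dedicated reader thread fills a bounded queue so the processing loop never issues a blocking read itself.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// A reader thread pushes chunks into a bounded channel (the
// read-ahead buffer); the consumer drains the channel without
// ever blocking on the disk directly. Reads are simulated here.
fn prefetch_total(chunks: usize) -> usize {
    let (tx, rx) = sync_channel::<Vec<u8>>(4); // bounded read-ahead queue
    let reader = thread::spawn(move || {
        for i in 0..chunks {
            // A real implementation would read chunk `i` from disk.
            tx.send(vec![i as u8; 1024]).unwrap();
        }
    });
    let total: usize = rx.iter().map(|chunk| chunk.len()).sum();
    reader.join().unwrap();
    total
}

fn main() {
    println!("processed {} bytes", prefetch_total(8));
}
```

The bounded channel is the important design choice: it caps memory use while still decoupling processing latency from disk latency.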


There are plenty of applications that do not run on servers. Lots of IO bound stuff in mobile or desktop apps - waiting for network responses, reading data files on startup, etc.


> In an era of 10Gb NICs in every server very few things are really IO bound.

for my data crunching project, one core processes about 500MB/s = 4Gb/s, and I have 64 cores..


10Gb NICs and their respective connections are quite expensive. Not many servers have these at all.


As a person with a sysadmin + HPC background who has built several clusters recently, this is not true (anymore). 10G NICs are almost as common as gigabit NICs, both in availability and cost. To give you an idea, we commonly use 10G NICs on all compute nodes; they connect to a 10G top-of-rack switch, which in turn connects to services like file servers via 100G links. The 10G connections are all simple 10GBase-T Ethernet. The 100G connections are DACs, which are more expensive but not prohibitively so.

What cloud providers give you for VMs is not the norm in datacenters anymore.


Everything is relative. If you are a cloud provider it's one thing. I'm speaking from the perspective of the small or medium business that rents these physical or virtual servers.


My $700 Mac Mini has a 10Gb NIC. 2.5Gb and 5Gb NICs are very common on modern PC motherboards. Modern servers from Dell and HP ship with 25Gb or even 100Gb NICs.


The cost of 10G is much more than a single computer. The entire networking stack must be upgraded to 10G: at the very least the switch or router, and possibly the Internet connection as well. It will be cheaper in the cloud than on site.


Well, it depends on what your use case for "10G" is. If all you care about is fast file transfers between your PC and your NAS, you can get a small 5-8 port 10Gb switch for under $300 that will easily handle line-rate traffic (at least for large packet sizes).

If you want 10G line-rate bandwidth between hundreds or thousands of servers? Yeah, I used to help build those fabrics at Google. It's not cheap or easy.

10G to the internet is more about aggregate bandwidth for a bunch of clients than throughput to any single client. Except for very specialized use cases, you're going to have a hard time pushing anywhere close to 10G over the internet with a single client.


10Gb Ethernet is 20+ year old tech and is used these days in applications that don't have high bandwidth demands. 100Gb (and 40Gb for the mid range) NICs came around 2014. People were building affordable home 40Gb setups in 2019 or so [1]. But I can believe that the low end makes up a lot of the volume in the server market.

[1] https://forums.servethehome.com/index.php?threads/cheap-40gb...


In my experience, 40Gb and 100Gb are still mostly used for interconnects (switch/switch links, peering connections, etc.), mostly due to the cost of NICs and optics. 25Gb or Nx10Gb seems to be the sweet spot for server/ToR uplinks, both for cost and because it's non-trivial to push even a 10Gb NIC to line rate (which is ultimately what this entire thread is about).

There's some interesting reading in the Maglev paper from Google about the work they did to push 10gb line rate on commodity Linux hardware.


I guess it'll also depend a lot on what size of server you have. You'd pick a different NIC for a 384-vCPU EPYC box running a zillion VMs in an on-prem server room than for a small business's $500 1U colo web server.

The 2016 Maglev paper was an interesting read, but note that the 10G line rate was with tiny packets and without things like TCP send offload (because it's a software router that handles each packet on the CPU). Generally, if you browse around, there isn't an issue with saturating a 100G NIC when using multiple concurrent TCP connections.


Yes, exactly. Not everything seeking concurrency is a web server. In an OS, every single system service must concurrently serve IPC requests, but the vast majority of them do so single threaded to reduce overall CPU consumption. Making dozens of services thread-per-core on a four-core device would be a waste of CPU and RAM.


> Not everything seeking concurrency is a web server.

Web servers should be overwhelmingly synchronous.

They are the kind of application that's easiest to just launch more of, even on different machines. There are some limits on how many you can run, but they aren't anywhere near low. (And when you finally reach them, you are much better off rearchitecting your system than squeezing out a marginal improvement with asynchronous code.)

There's a lot to gain from non-blocking IO, so you can serve lots and lots of idle clients. But not much from asynchronous code. Honestly, I feel like the world has gone crazy.


tokio supports a single-threaded executor when you really need it, and it's not even hard. It's called a LocalSet in tokio's API:

https://docs.rs/tokio/latest/tokio/task/struct.LocalSet.html...
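A minimal sketch (assuming the tokio crate with its runtime features enabled): a current-thread runtime plus a LocalSet lets you spawn_local futures that are not Send, such as ones holding an Rc.

```rust
use std::rc::Rc;
use tokio::runtime::Builder;
use tokio::task::LocalSet;

fn main() {
    // One OS thread, no work stealing.
    let rt = Builder::new_current_thread().enable_all().build().unwrap();
    let local = LocalSet::new();
    local.block_on(&rt, async {
        let data = Rc::new(42); // !Send is fine on a LocalSet
        let data2 = Rc::clone(&data);
        tokio::task::spawn_local(async move {
            println!("got {}", data2);
        })
        .await
        .unwrap();
    });
}
```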


This is true but the rest of the ecosystem is not built for it.

If you try to use axum in this way, you'd still need Send and Sync bounds all over the place.


I was going to comment on the same quote.

The problem is that one may still want concurrency even when a single thread on a single CPU is enough.


Instead of Arc and Mutex you'd be using Rc and RefCell. Wouldn't it be just as complex and verbose code-wise?

I understand that Arc/Mutex is less efficient, but in the case you describe wouldn't paying for a few extra atomics be negligible anyway?
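For what it's worth, the syntactic overhead really is nearly identical; a toy sketch:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Single-threaded sharing: Rc::clone instead of Arc::clone,
// borrow_mut() instead of lock().unwrap(). The shape of the code
// mirrors the Arc/Mutex version almost one-to-one.
fn main() {
    let counter = Rc::new(RefCell::new(0u32));
    let handle = Rc::clone(&counter);
    *handle.borrow_mut() += 1;
    assert_eq!(*counter.borrow(), 1);
}
```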


I've found that, practically, I'm more likely to simply use Box, Vec, and regular data on the stack rather than Rc and RefCell when I eschew Arc and Mutex by using a single context. The data modeling is different enough that you generally don't have to share multiple references to the same data in the first place. That's where the real efficiencies come into play.
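A hypothetical sketch of that restructuring (names are illustrative): one context struct owns all the state, and handlers borrow it via `&mut`, so there is nothing to wrap in Rc/RefCell at all.

```rust
// All state lives in one owned context; no shared references.
struct Ctx {
    count: u32,
    log: Vec<String>,
}

// Handlers take the context by mutable borrow instead of holding
// their own cloned handle to shared state.
fn handle_event(ctx: &mut Ctx, msg: &str) {
    ctx.count += 1;
    ctx.log.push(msg.to_string());
}

fn main() {
    let mut ctx = Ctx { count: 0, log: Vec::new() };
    handle_event(&mut ctx, "start");
    handle_event(&mut ctx, "stop");
    assert_eq!(ctx.count, 2);
}
```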


I wasn't familiar with the underlying effect. This paper seems relevant and is very readable.

https://www.sciencedirect.com/science/article/abs/pii/S23524...


- absorption and radiation are correlated (a black surface would radiate more than a white surface when heated to 6000K, assuming they survive the temperature unchanged)

- a solid polished surface reflects better than a powder surface (that's imho a flaw in the article)

- basically you want a surface that is white in solar wavelengths and black in atmospheric wavelengths


One of my high school professors touched on this when talking about heatsink design, and I thought I had remembered it wrong. Glad to see it come up here.


Exactly. The trick here is that you're exploiting overlaps in emission and absorption spectra in the atmosphere.


I like https://www.borgbackup.org/ and you can get a relatively cheap storage plan on rsync.net just for Borg (https://www.rsync.net/products/borg.html)

Retrieving backups is a little manual. Perhaps someone has created a nice GUI for it.


There are at least two GUIs for borgbackup:

* Vorta (https://github.com/borgbase/vorta)

* Pika Backup for Gnome (https://apps.gnome.org/app/org.gnome.World.PikaBackup/)

Vorta is multiplatform and, in my experience, more reliable.


I just mount the backup repository and then copy what is needed.


It's a single axis version of a system my group at NASA developed:

https://roundupreads.jsc.nasa.gov/pages.ashx/787/New%20weara...

The internal application was to improve mobility in space suits. We had a partnership with some medical researchers looking to help patients with otherwise limited mobility.

Shoulders are difficult. The human body has a lot of amazing degrees of freedom. One of the biggest challenges was efficient and effective transfer of the assist forces to the body.


A lot of the design of the human body sacrifices strength for mobility and range of motion. Most muscles have really unfortunate mechanical leverage, to the degree that it's quite impressive we're as strong as we are.

Adding strength to that without completely butchering mobility is probably no easy task.


What's remarkable to me about this is how specialized our shoulder-arm linkage is for overhand throwing.

A lot of the typical difference between the male and female upper body comes down to this specialization. There is some evidence of facial adaptation to punching, but we could hit much harder with a more chimp-like shoulder (this doesn't require knuckle walking); we just wouldn't throw a spear as far or as accurately.


I'm also impressed every time I lift weights and think about how close to the joint the muscles attach, providing very little leverage. On a side note, this is why chimps f.x. are so strong: their muscles attach further from the joints and thereby provide more leverage.


> chimps f.x.

Just a small note: that abbreviation seems extremely rare to me. You might have better readability by saying "e.g.", which means the same thing, or just writing it out. It took me a while to figure out whether you were referring to some body part belonging to chimps or something.


Wow even after reading the above comment I still can't confidently piece together what "chimps f.x." means. Is it "chimps for example"?


seriously? I speak internally while writing and "for example" feels more fluent than "exempli gratia". that's why I prefer fx. I'm not a native speaker, though


Another native speaker chiming in. This is my first time encountering f.x. and it took me quite a while to figure out (essentially guess) what it meant. Most people I know and situations I've encountered use e.g. (possibly without even knowing what it means). In common usage e.g. is "for example" just like etc. means "and additional things" or i.e. means "that is".


I wish I could write or speak another language anywhere close to as well as you do English.

You're right! I certainly don't think of the words "exempli gratia" -- I literally think the letters "e g" as a mental shorthand for "for example". I often find myself writing "e.g." first, and then expanding it to "for example" when re-reading what I wrote.

Sorry. English is weird. "f.x." seems like it should be a preferable abbreviation for "for example", but it just isn't idiomatic. I figured out that was what you meant when reading it, but it definitely stood out in much the same level of wrongness as seeing code that isn't formatted correctly or that uses a non-idiomatic way of doing things (list comprehensions in Python).


I always thought e.g. meant "example given"; there's also "i.e." which I presumed meant "in example".

anyway, "for example" takes two seconds to type (if that), if abbreviations cause confusion (in general, in any situation, especially professionally), avoid them.

Anyway I'll brb, I got an I&A meeting for our SAFe procedure, gotta get our CI's and DoD in order and make sure we execute LCM properly. No I don't know what any of these abbreviations mean, but this is the situation we find ourselves in, lmao


I'd suggest sticking to "e. g." as well. I'm also not a native speaker and have never seen the abbreviation "f. x." before so I couldn't figure out its meaning.


If you know what e.g. stands for, you don't need to expand it. Native speakers just say "e.g.", as in, "ee gee". If you don't know what "fx" stands for then how would you expand it? It's not a common abbreviation at all


FWIW the abbreviations listed in Wiktionary are f.e. and f.ex.


chimps: f(x) = y^2


Depends on where in the world you hail from originally. I have also seen "fks" used to mean the same thing. It's more common with non-native English speakers.


It's probably a feature of non-western English varieties. I've never seen "fks" used to mean "for example" and would never imagine that that's what it stands for.


Yep, and they vary somewhat from person to person. I think it's part of why some people are apparently stronger than they look: longer tendons give more leverage with less muscle mass.


Is there any advantage to having muscle attachments closer to the joint?


Range of motion. You can pick up a stick, hold it above your head, and throw it with significant power and control. A chimp can't do that.


I was thinking more within human variance. For example, I've seen athletes with high calf muscles who can jump really high. Is there an advantage for those whose calf muscles stretch to the bottom of the leg? Do they have some increased range of motion that this helps with?


That's probably the secret of the shoulder, it's strong(-ish) in some directions and very weak in others.


And why people dislocate their shoulders so frequently, just doing normal things. Someone I knew dislocated his shoulder swimming freestyle. It just happened.


I think yours is a multiple-axis version of this one, since this is the topic ...


Most hardware engineering is done ahead of time without a full production-style environment, because the cost of iterating is much too high. You can't build a bridge every time you want to try a new cable or bolt. This forces designers to make models and assumptions about their systems and inherently puts downward pressure on complexity. It also forces them to truly understand the principles behind what they are building.

The fact that Perseverance and other Mars rovers have been successful is amazing and took an incredible amount of work to accomplish. These are complicated systems that were vetted using models and simulations without ever having been run "in production". This comes at a high cost.

Critical software is never tested in production or run "in system" before it is deployed. Airplanes, banks, medical systems all require extensive validation through testing on models and simulations. You can't test your changes for the first time on a live aircraft or living tissue. Costs reflect that.

The truth is, a lot of software is not critical. You can get away with hacking / trial-and-error development and never fully understand the system you are helping to build. Frankly, there is a lot of money to be made providing brand new services that are unreliable or quirky or ephemeral, because those services never existed before.

My point is that how you test and validate your software product depends on your application. Sometimes the costs don't make sense to "run everything" and sometimes it's physically impossible. I agree that you should always advocate for the highest-fidelity testing your business case can afford, but be prepared to settle for less than everything and rely on your engineering skills to buy down risk in the gaps.


On the list of features: "Dedicated Audio Processing DSP and sub-system".

The hardware might be there to do some good audio processing, but how it integrates with the OS is something I'm not experienced with.


I did not know this about connection limits.

The multi-domain assets really bug me when I'm enabling domains one by one in NoScript.

Here is an example of domains used on Amazon's website:

  amazon.com
  www.amazon.com
  amazon-adsystem.com
  associates-amazon.com
  media-amazon.com
  ssl-images-amazon.com
On its own, "associates-amazon.com" sounds sketchy, but I suppose you assume the HTTPS page that you loaded from amazon.com knows what it's doing.


The HTTP 1.1 RFC says 2 per domain, but it’s more like 6 for most browsers.

https://docs.pushtechnology.com/cloud/latest/manual/html/des...

