> There were certainly a lot of people running around claiming that "Rust eliminates the whole class of memory safety bugs."
Safe Rust does do this. Dropping into unsafe Rust is the prerogative of the programmer who wants to take on the burden of preventing bugs themselves. Part of the technique of Rust programming is minimising the unsafe part so memory errors are eliminated as much as possible.
If the kernel could be written in 100% safe Rust, then any memory error would be a compiler bug.
> it is not "Safe Rust" which is competing with C it is "Rust".
It is intended that Safe Rust be the main competitor to C. You are not meant to write your whole program in unsafe Rust using raw pointers - that would indicate a significant failure of Rust’s expressive power.
It's true that many Rust programs involve some element of unsafe Rust, but that unsafety is meant to be contained and abstracted, not pervasive throughout the program. That's a significant difference from how C's unsafety works.
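To make that concrete, here's a minimal sketch (my own toy example, nothing from the kernel) of the containment pattern: the unsafe operation lives behind a safe function that establishes the invariant, so callers can't misuse it and the audit surface shrinks to one block.

```rust
/// Safe wrapper around an unsafe slice access: the bounds check lives
/// here, so no caller can cause undefined behaviour through this API.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: we just verified that index 0 is in bounds.
    Some(unsafe { *bytes.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```

Auditing memory safety then means auditing that one block and its stated invariant, not every call site.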
But there are more than 2000 uses of "unsafe" even in the tiny amount of Rust in the Linux kernel. And you would need to compare against C code where an equal amount of effort went into developing safe abstractions. So essentially this is part of the fallacy Rust marketing exploits: comparing an idealized "Safe Rust" scenario to the real-world, resource-constrained usage of C by overworked maintainers.
The C code comparison exists because people have written DRM drivers in Rust that were of exceedingly high quality and safety compared to the C equivalents.
Even if you somehow manage to ignore the very obvious theoretical argument for why it works, the amount of quantitative evidence at this point is staggering: Rust, unsafe warts and all, substantially improves the ability of any competent team to deliver working software. By a huge margin.
This is the programming equivalent of vaccine denialism.
So kernel devs claiming Rust works isn't good enough? CloudFlare? Mozilla?
You're raising the bar to a place where no software will be good enough for you.
What I'm most curious about, and what the docs are light on detail about: does this mean Thunderbird complies with remote deletion requests (which, IIRC, the Exchange protocol supports)? I have the impression that Microsoft makes this a requirement for Exchange implementations, which is why third-party devices and apps like Apple's Mail cooperate with those requests.
Not sure how Mozilla went about the implementation, but I do agree it would be a concern to verify before using.
You can perform the following Exchange ActiveSync tasks:
Enable and disable Exchange ActiveSync for users
Set policies such as minimum password length, device locking, and maximum failed password attempts
Initiate a remote wipe to clear all data from a lost or stolen mobile phone
Run a variety of reports for viewing or exporting into a variety of formats
Control which types of mobile devices can synchronize with your organization through device access rules
Some clients perform some of those operations in a sandbox. E.g. Nine for Android lets you choose, when you set up an account, whether a remote wipe command should wipe just that account's local mailbox or your whole device.
ActiveSync will forever be reserved for the technology I used to sync email and calendar on my HP Jornada 430 running Windows CE - just like James Bond did!
No, Exchange ActiveSync (as the other commenter correctly identified it) really allows an admin to wipe your device - ostensibly of mail, but often of all other data as well.[0]
If your Outlook server disables IMAP & POP3, then the ActiveSync protocol is AFAIK the only way to get in-app emails on your phone. Admins do this so that they can forcibly wipe the device if they "need" to.
> Where's the law preventing stores from imposing an accounting fee for multi-item purchases, conveniently totaling a few cents?
Where’s the law preventing someone from doing this right now? I don’t think this cynicism is justified.
Similarly, if places are willing to price stuff at $1.03 for the few extra cents they’ll collect some of the time, then they can just raise prices on 99c items right now to $1 to collect the extra cent, which they don’t do because such prices have a psychological effect on the consumer that outweighs the small gain.
> Where’s the law preventing someone from doing this right now? I don’t think this cynicism is justified.
You don't think businesses take advantage of situations for more profit?
Take this year's tariffs as an example. As you may've heard, UPS is charging customs brokerage fees of dozens or hundreds of dollars on top of the actual tariff payment; identical shipments sent via FedEx or DHL are only charged a few dollars for the service of customs brokerage, so we know UPS's actual costs for providing that service aren't that high. They saw a situation where consumers would be confused about prices and took advantage of it to make a lot more money by simply charging a lot more than they need to.
"But where's the law saying they couldn't have just raised their prices by hundreds of dollars without tariffs? Where's the law?!" There wasn't one, they could've raised their prices for international shipments before the tariffs happened. But consumers would have noticed a lot more and accepted it a lot less. They took advantage of the situation because the situation allowed them to get away with it.
> Similarly, if places are willing to price stuff at $1.03 for the few extra cents they’ll collect some of the time, then they can just raise prices on 99c items right now to $1 to collect the extra cent, which they don’t do because such prices have a psychological effect on the consumer that outweighs the small gain.
I'm not sure what you're arguing here. You admitted the $0.99 number has a psychological effect that outweighs the $0.01 gain of charging the extra cent. That would be the reason they don't do that. It's not super relevant to the discussion of whether rounding can/will be gamed.
> UPS is charging customs brokerage fees of dozens or hundreds of dollars on top of the actual tariff payment
To reinforce this point... UPS just does this all the time. I had a number of personal effects[1] shipped up from the US to Canada; I requested self-declaration forms for them and never received them - UPS decided to broker the shipment themselves. We then spent the next three months fighting a six hundred dollar charge[2] that should never have existed.
UPS is going to defraud customers on brokerage fees regardless of the scenario - it's just what UPS does. You've got bigger problems to worry about - the impact of dropping the penny will be unnoticeable in the sea of general corruption and fraud.
1. Items that you own in one country and are shipping to Canada for personal possession are exempt from most normal tariffs.
2. To really add icing to the outrage - this was more than double the original shipping price, and, since we delivered an itemization with the shipment for customs, UPS could have calculated their BS fee upfront and shown the actual cost to the customer. They don't, because the US doesn't force them to.
>You don't think businesses take advantage of situations for more profit?
That's not the point. Businesses are obviously happy to raise prices under the confusion of other changes, but I find it very hard to believe "accounting fees" are a plausible way to do so. People know that the register machine can do the calculations easily - it already does so. And there is a good reason for businesses not to introduce such fees, because they are directly visible to the consumer who is going to complain and shop elsewhere.
The UPS example is apples to oranges. Tariffs are poorly understood, and consumers rarely shop around for shipping - they tend to take whatever service the merchant offers. The agency people will exercise over two random cents added at every shop is way higher.
>It's not super relevant to the discussion of whether rounding can/will be gamed.
It's very relevant. How are consumers going to react to a price like $1.03? Especially since that's almost certainly something that would previously have been priced at $1.
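For anyone who hasn't seen how the gaming would work mechanically: under the usual round-cash-totals-to-the-nearest-five-cents rule (the exact rule and tie-breaking vary by country, so treat this as an assumption), a $1.03 total paid in cash rounds up to $1.05, while $1.02 rounds down to $1.00. A toy sketch:

```rust
// Sketch of nearest-five-cents cash rounding (assumed half-away-from-zero
// tie-breaking; real rules differ by jurisdiction).
fn round_to_nickel(cents: i64) -> i64 {
    ((cents as f64 / 5.0).round() as i64) * 5
}

fn main() {
    for total in [100_i64, 102, 103, 107, 108] {
        println!("{} cents -> {} cents", total, round_to_nickel(total));
    }
    // i.e. 100 -> 100, 102 -> 100, 103 -> 105, 107 -> 105, 108 -> 110
}
```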
> I mean, feeling sand compress in subtle ways and being able to map that mentally to an object that might be hidden in the sand seems like literally touch plus normal world modelling / reasoning
That seems like a very strong claim against the paper’s results. What makes you think that the study participants located the cube with reasoning, rather than unthinking sense?
I think we can be too quick to write things off as somehow coming from conscious thought when they bypass that part of our minds entirely. I don’t form sentences with a rational use of grammar. I don’t determine how heavy something is by reasoning about its weight before I pick it up. There is something much more interesting happening cognitively in these cases that we shouldn’t dismiss.
This is like wondering if we calculate parabolas consciously before catching a tossed ball. We had to learn its behavior without knowing the physics, but it becomes unconscious soon. If tossed balls behaved like they do in cartoons, we'd learn to predict them even if they violated the laws of physics.
I still wonder how we practiced finding things by distinguishing the fluidity of a medium around them. Maybe playing in water?
Probably because it’s unclear what this pedantry about synecdoche contributes to the discussion. Many people (including journalists at the state broadcaster) happily refer to the whole tower as Big Ben, so that is functionally one of its names.
Is the fact that the name originates from a bell, and that the official name for the tower is different, interesting? Maybe. Is it worth “correcting”? No, for the same reason it’s not worth policing people’s use of “Google” to mean “Alphabet”.
While I'd typically agree that pedantry is normally rather boring, the pedantry in this case is actually interesting, in that the individual parts of the object have their own names and are uniquely addressable. It sounds like they are public as well.
I also reject your premise that this is anything like treating Google and Alphabet as interchangeable.
>I've been drinking raw milk probably since I was 3 year old, like most kids in my relatively underdeveloped country before I moved to the US.
Why do you think this is a strong enough reason to allow a dangerous product that used to kill people onto the market? This anecdote isn't a strong empirical justification for the safety of raw milk, just like saying that you often don't crash your car isn't a good argument that seatbelts are unnecessary. Food poisoning incidents are not that common, even in unsanitary conditions - pasteurisation is about making it so that kids don't get unlucky.
The repo is sparse on the details unless you go digging, which perhaps makes sense if this is just meant as the artifact for the mentioned paper.
Unless I’m wrong, this is mainly an API for trying to get an LLM to generate a Z3 program which “logically” represents a real query, including known facts, inference rules, and goals. The “oversight” this introduces is in the ability to literally read the logical statement being evaluated to an answer, and running the solver to see if it holds or not.
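For concreteness, here's a hand-written toy of the kind of encoding I mean - propositional facts plus a rule, with the goal checked by asserting its negation - using the Rust z3 crate purely as an illustration (the repo generates its own programs, so don't read this as their actual output):

```rust
use z3::{ast::Bool, Config, Context, SatResult, Solver};

fn main() {
    let cfg = Config::new();
    let ctx = Context::new(&cfg);
    let solver = Solver::new(&ctx);

    // Known fact and inference rule, stated propositionally for brevity.
    let man = Bool::new_const(&ctx, "socrates_is_man");
    let mortal = Bool::new_const(&ctx, "socrates_is_mortal");
    solver.assert(&man);                  // fact: Socrates is a man
    solver.assert(&man.implies(&mortal)); // rule: every man is mortal

    // Goal check: assert the negation of the goal.
    // UNSAT means the goal follows from the asserted facts and rules.
    solver.assert(&mortal.not());
    match solver.check() {
        SatResult::Unsat => println!("goal follows from the facts"),
        _ => println!("goal does not follow (or unknown)"),
    }
}
```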
The natural source of doubt is: who’s going to read a bunch of SMT rules manually and be able to accurately double-check them against real-world understanding? Who double checks the constants? What stops the LLM from accidentally (or deliberately, for achieving the goal) adding facts or rules that are unsound (both logically and from a real-world perspective)?
The paper reports a *51%* false positive rate on a logic benchmark! That's shockingly high, and suggests the LLM is either bad at building logical models or keeps introducing unsoundness. Sadly, the evaluation is thin on how these failures arise and what causes the approach to fall short.
Yep. The paper was written last year with GPT-4o. Things have become a lot better since then with newer models.
E.g. in https://arxiv.org/pdf/2505.20047, Table 1, we compare performance on text-only vs SMT-only reasoning. o3-mini does pretty well at mirroring its text reasoning in its SMT, vs Gemini Flash 2.0.
An illustration of this can be seen in Figures 14 and 15 on page 29.
In commercially available products like AWS Automated Reasoning Checks, you build a model from your domain (e.g. from a PDF policy document), cross verify it for correctness, and during answer generation, you only cross check whether your Q/A pairs from the LLM comply with the policy using a solver with guarantees.
This means that they can give you a 99%+ soundness guarantee, which basically means that if the service says the Q/A pair is valid or guaranteed w.r.t the policy, it is right more than 99% of the time.
Re: "99% of the time" ... this is an ambiguous sample space. Soundness of results clearly depends on the questions being asked. For what set of questions does the 99% guarantee hold?
It is likely too late for many existing contracts with packages built-in, which probably also overlap with the longest-working (and thus most expensive) engineers.
Isn't the consensus from climate scientists that emission reductions are totally necessary, and there is no solution which is solely based on capture of greenhouse gases or cooling technologies? Even if reducing emissions is not enough, I thought it was clear that it needs to be done - and the economic impact is a necessary evil, since in reality we are just seeing the reversal of economic benefits obtained at the cost of planetary temperatures.
I haven't seen an analysis of stratospheric aerosol injection that suggested it couldn't solve the problem. We know it works because it happens naturally via volcanoes, and if we do it ourselves we can do it almost entirely without the other bad effects volcanoes cause. The opposition I've seen to it has been on moral or ethical grounds, or misunderstandings.