Don't give your code to Microsoft if you don't want them to have your code.
This setting will make no difference to whether your code is fed into their training set. "Oops, we accidentally ignored the private flag for years and didn't realise. We are very sorry, we were trying not to do that."
It's obviously, trivially broken. It stores the index before storing the value, so the other thread reads nonsense whenever the race goes against it.
Also doesn't have fences on the store, has extra branches that shouldn't be there, and is written in really stylistically weird C++.
Maybe an LLM that likes a different language more, copying a broken implementation off GitHub? Mostly commenting because the initial replies are "best" and "lol", though I sympathise with one of those.
There's no relationship between the two written variables. Stores to the two are independent and can be reordered. The aq/rel applies to the index, not to the unrelated non-atomic buffer located near the index.
> There's no relationship between the two written variables. Stores to the two are independent and can be reordered. The aq/rel applies to the index, not to the unrelated non-atomic buffer located near the index.
No, this is incorrect. If you think there's no relationship, you don't understand "release" semantics.
> A store operation with this memory order performs the release operation: no reads or writes in the current thread can be reordered after this store. All writes in the current thread are visible in other threads that acquire the same atomic variable (see Release-Acquire ordering below) and writes that carry a dependency into the atomic variable become visible in other threads that consume the same atomic (see Release-Consume ordering below).
> write with release semantic cannot be reordered with any other writes, dependent or not.
To quibble a little bit: later program-order writes CAN be reordered before release writes. But earlier program-order writes may not be reordered after release writes.
> Relaxed atomic writes can be reordered in any way.
To quibble a little bit: they can't be reordered with other operations on the same variable.
That's backwards: in C++, a release store to head_ and an acquire load of that same atomic do order the prior buffer_ write, even though the data and index live in different locations, so the consumer that sees the new head can't legally see an older value for that slot unless something else is racing on it separately. If this is broken, the bug is elsewhere.
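The guarantee being described can be sketched in a few lines. This is a minimal single-producer/single-consumer queue, not the code under discussion; the names (buffer_, head_, tail_) and the capacity are assumptions for illustration. The release store to head_ publishes the preceding plain write to buffer_, and the consumer's matching acquire load makes that write visible:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

// Minimal SPSC ring buffer sketch: one producer thread calls push(),
// one consumer thread calls pop(). Names are hypothetical.
template <typename T, std::size_t N>
class SpscQueue {
    T buffer_[N];
    std::atomic<std::size_t> head_{0};  // written by producer only
    std::atomic<std::size_t> tail_{0};  // written by consumer only
public:
    bool push(const T& v) {
        std::size_t h = head_.load(std::memory_order_relaxed);
        if (h - tail_.load(std::memory_order_acquire) == N)
            return false;                               // queue full
        buffer_[h % N] = v;                             // plain write first...
        head_.store(h + 1, std::memory_order_release);  // ...then publish it
        return true;
    }
    bool pop(T& out) {
        std::size_t t = tail_.load(std::memory_order_relaxed);
        if (head_.load(std::memory_order_acquire) == t)
            return false;                               // queue empty
        out = buffer_[t % N];  // acquire above makes the slot write visible
        tail_.store(t + 1, std::memory_order_release);
        return true;
    }
};
```

If both index operations were relaxed, the compiler or CPU would be free to make the new index visible before the slot contents, which is exactly the reordering being debated.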
* we limit data shared to an atomic-writable size and have a sentinel - less mucking around with cached indexes - just spinning on (buffer_[rpos_] != sentinel) (atomic style with proper semantics, etc.).
* buffer size is compile-time - then mod becomes compile-time (and if a power of 2, just a bitmask) - so we can use a 64-bit uint to count increments, not position. No branch to wrap the index to 0.
Also, I think there's a chunk of false sharing if the reader is 2 or 3 ahead of the writer - so performance will be best if reader and writer are a cache line apart - but will slow down if they are sharing the same cache line (and buffer_[12] and buffer_[13] very well may be, if the payload is small). Several solutions to this - the Disruptor pattern, or use a cycle from group theory - i.e. buffer[_wpos % 9] for example (9 needs to be computed based on cache line size and size of payload).
I've seen these be pushed to about clockspeed/3 for uint64 payload writes on modern AMD chips on the same CCD.
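The compile-time-size trick above can be sketched like this (the capacity value and names are hypothetical): with a power-of-2 capacity the modulo collapses to a bitmask, and 64-bit positions just count up forever, so there is no wrap-to-zero branch at all.

```cpp
#include <cassert>
#include <cstdint>

// Capacity fixed at compile time and a power of 2 (value is hypothetical).
constexpr std::uint64_t kSize = 64;
static_assert((kSize & (kSize - 1)) == 0, "capacity must be a power of 2");
constexpr std::uint64_t kMask = kSize - 1;

// Positions are monotonically increasing 64-bit counters; the slot index is
// just the low bits, equivalent to pos % kSize but branch-free and cheap.
constexpr std::uint64_t slot(std::uint64_t pos) { return pos & kMask; }
```

A uint64 incremented once per nanosecond takes centuries to overflow, so never resetting the counters is safe in practice.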
If you can get a megawatt into the car batteries without setting them on fire, that's game over for petrol cars. And for the other electric vehicles that haven't worked it out yet. Only reason I'm on petrol is unwillingness to wait an hour to recharge the car.
The rest of the infra is fine if that can be done. Array of batteries and/or capacitors at the supply point and draw continuously from the grid.
Most entertainingly run a diesel generator on site if that doesn't work out. Lines up well with basing them at the existing fuel stations, got the diesel supply already sorted out.
Put a bunch of solar near it when you can. Maybe sell back to grid, nice to have the extra capacity available.
All comes down to capital deployment at that point. Do the calculations on how much to charge for slow car charge vs fast charge, fallback to slow with an apology/discount when the infra is struggling etc.
Huge news. Iff the cars don't catch fire when plugged in.
I have as far as I'm aware the cheapest 800v car on sale in the US (Hyundai Ioniq 5) and in the right weather conditions a 20-80% charge is legitimately 10 minutes.
The weather conditions do unfortunately matter. Travelling during the post-Christmas blizzard last year was very much less than ideal. The battery heaters in my car could not keep up with how bitterly cold and windy it was and I had multiple 30-45 minute charging sessions because it wasn't ever warm enough to accept more than ~120kW.
I'm looking forward to traveling with it in the warm season and seeing how things compare.
Now (in China) there are also cars with sodium-ion batteries, instead of lithium-ion batteries.
Sodium-ion batteries have the disadvantage of a worse energy-to-weight ratio, but they also have an advantage (besides the fact that they will become cheaper once their production matures): they work much better at low temperatures, not losing capacity or charging speed down to minus 40 degrees Celsius.
Therefore, they may become preferable in colder climates, where they will not have the problems you describe.
> If you can get a megawatt into the car batteries without setting them on fire, that's game over for petrol cars
Chinese people are complaining about this. At highway service stops, the megawatt charger is too fast: the 20%-95% charge is done before people return from the toilet. Realistically, the charging should take around 10 minutes on average for everyone.
Or there could be some price surges. If you're in a real hurry, pay some 1.2x price for a 3-minute megawatt charge, or a flat price for a regular 10-minute charge.
For me EVs already won when charging got down to 20 minutes.
EVs charge unattended. It takes less of my own time to leave EV plugged in parked next to a place I want to be at, than to go drive to a gas station and stand there holding a smelly nozzle.
Agreed. Right now EVs are almost strictly superior for day to day usage (only real downside is that the higher weight goes through tires faster). But for road trips, combustion vehicles blow them out of the water. If I'm taking a 12 hour road trip, no way am I going to take an EV if that means I will have to spend an extra hour or two charging it.
My wife has an EV and it's genuinely really nice. But until they get the charging experience on par with the speed of filling up a gas tank, we will always have one of our two cars be a combustion car, to give us that extra flexibility for long trips.
Or just eat an extra few minutes of charging time once or twice a year; it's simply not a big deal. Charging at home saves me so much time relative to getting gas that the occasional road trip wait is already overcompensated for. ICE/hybrid only saves you time if you can't charge at home or do lots of road trip type driving.
"Fair" or "insane" ideas on price vary a lot between people. See also "competitive" salary on job posts.
You might think $10 an hour is fair. Or you might think $1000 an hour is fair. If the developers you're trying to contact can't guess where you are on pricing, they'll probably ignore you.
Internet traffic today is estimated to be a few tens of exabytes per day. Even if you assume 100,000 Starlink satellites (we're far from that), each satellite would have to handle hundreds of terabytes per day. That's tens of gigabits per second per satellite, assuming traffic is split evenly among them (which will never happen in practice).
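For what it's worth, the arithmetic holds up. A rough worked version, where the ~30 EB/day figure and the 100,000-satellite count are the assumptions from the comment, not measured data:

```cpp
// Both inputs are assumptions for a back-of-envelope estimate.
constexpr double kExabytesPerDay = 30.0;     // "a few tens of exabytes per day"
constexpr double kSatellites     = 100000.0; // hypothetical constellation size

// Per-satellite share, assuming a perfectly even split (which won't happen).
constexpr double kBytesPerDay     = kExabytesPerDay * 1e18 / kSatellites;
constexpr double kTerabytesPerDay = kBytesPerDay / 1e12;                // ~300 TB/day
constexpr double kGbitPerSec      = kBytesPerDay * 8.0 / 86400.0 / 1e9; // ~27.8 Gbit/s
```

An even split works out to roughly 300 TB per satellite per day, i.e. just under 28 Gbit/s sustained, matching "hundreds of terabytes" and "tens of gigabits".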
Starlink V3 can pump out some seriously impressive speeds and handle thousands of clients. Starlink is a great leap forward in both rocketry and radio technology.
I do still find it funny how we are going back to the pre-war technology tree for a revisit.
That's not even sufficient to handle the needs of a single large city. The limitation is that even with the much larger constellation they hope to deploy there won't be enough satellites visible at once from any given large metro area.
So gain access to a machine that can ask Microsoft Intune to eviscerate the company, ask it to do so, done. Bit of a shame all the machines had that installed, really. Reminds me of CrowdStrike.
My 95% bet is that the attacker just gained access to an account with suitable privileges and then went on to use existing automation. The fact that it’s Intune is largely irrelevant - I’m not aware of any safeguards that any provider would implement.
So the options here are MDM or no MDM and that’s a hard choice. No MDM means that you have to trust all people to get things as basic as FDE or a sane password policy right. No option to wipe or lock lost devices. No option to unlock devices where people forgot their password. Using an MDM means having a privileged attack vector into all machines.
How does that look exactly? Someone has to be able to use MDM to manage devices or there’s no point in having it. This scenario is firmly in rubber hose/crescent wrench cryptanalysis territory. Can updates have delays with approval gates built in? Does MDM need a break glass capability?
For one, do not use a global admin or other admin account as a daily driver. Don't save it in the browser, etc. either.
Limit roles, even within the application - here, Intune.
Office 365 also has conditional access and many policy levers to tweak; there are many cases of people locking themselves OUT of 365. So the gates work, but you need to configure them.
For Stryker specifically? We don't and probably won't know details.
For companies in general? Background checks, security clearance, etc. are done if the company determines it necessary and is willing to pay for the process and higher salary.
I’m asking if it’s possible to secure the MDM process in a way that Iranian operatives can’t simply torture an administrator into pushing the big red MDM button.
Well, all the machines in the current outfit are Linux as far as I know. Services are self hosted. Seems to be fine, teams et al run adequately in a browser for talking to people on other stacks.
Previous place had a corporate controlled windows laptop that made a very poor thin client for accessing dev machines. One before that had a somewhat centrally managed macbook that made a very poor thin client for accessing dev machines.
You don't have to soul bond to Microsoft to get things done.
I don't see how Linux would prevent anything if the company wants similar controls on their machines: tracking update status, forcing updates when needed, potentially wiping the entire device when stolen, and so on. The fault really is not the OS but the control the corporation wants over its devices. And it does make some sense.
Indeed. You'd expect a corporate IT system to be able to ssh as root into all their devices. And the cloud is even worse: if you get hold of the right IAM role, you can simply delete everything! That does usually get locked behind proper 2FA, but it's not impossible to phish even experienced admins once in a while.
All the Linux kernel development work is organized around a mailing list, and some private IRC chats for the core people. It's the technology of the nineties but it works for them.
A lot of corporate stuff seems to be much worse than even a random vibe coded web app. I have to book holiday through something called "HR Connect", watching pages load laboriously and redirect every login through several very long URLs. Slowly.
Yes, the Linux kernel people can be trusted to manage their own machines. Random corp employees cannot. Also corp machines are corp property, not the employees own. If you have 1000 or 10,000 machines you need to manage them. Full stop.
Yes, many corporate websites are bad. Like ERP or HR systems. None of that has to do with device management, RMMs/MDMs or Intune.
Microsoft keeps disappointing and chief technology officers keep paying them. Wasn’t Elon Musk supposed to prove you could vibe code their entire product line? What happened to all that?
An alternative is people install the software they choose to on the machines they're using. Optionally write a list of suggested programs down somewhere.
In that world, there is no central IT team pushing changes to machines and arguing with developers about whether they really need to be able to run a debugger.
I don't know how to keep windows machines alive. It's probably harder.
- Ensure the machines are up-to-date and users are not just indefinitely postponing OS updates?
- Same as above but with programs/software
- How do you ensure correct settings configuration in terms of security? Say default browser, extensions, program access etc?
- Re-image or reinstall the OS when there are issues or PC handover to another employee? Manually with a USB stick?
This kind of control exists and is needed for Linux and MacOS too. RMM is not a Windows only thing...
The critics here see Intune but what if they used another RMM and they compromised another cloud RMM account? Same issue.
Also, here there is no "arguing". They order the software from our portal and it gets pushed into Company Portal via Intune...
Write down a list, you say... idk what to say. You have only worked for small startups, I gather? Nothing wrong with that, but please recognize that these types of limits and programs are not deployed for fun or to ruin your day.
I hear zero-trust is a trendy buzzword at the moment, so let's apply the basic idea here: having a hard shell and a soft, chewy center is not a security posture that works in practice. You need to harden at every level. RMM uber-admin credentials are the ultimate soft center: compromise those and you can kill the entire IT infrastructure.

The only alternative is to distribute access: have multiple smaller IT teams that administer small parts of the system, with more 'central' roles providing services but not having full control of most machines. It's not a fun option, but it might also work a lot better if each team can actually adjust policies for the environment they're working in, as opposed to trying to have one completely unified policy for an entire multi-thousand-employee company.

And for critical systems, I would seriously consider the wisdom of having a remote 'wipe and reformat' button at all.
At a bare minimum, your backup systems should have a completely disjoint set of credentials from your main systems, stored and controlled differently, ideally by a separate team, if you have the resources.
(And the arguing becomes a problem when IT ceases to consider their job to be solving problems for users within some constraints, and just starts to consider their job to be enforcing those constraints. This also mixes badly with incompetence, which tends to turn everything into a tedious tick-box exercise that neither improves security nor solves user's problems. It's not a good time to have an IT department that can't resist any new security checkbox a vendor offers but can't figure out how to work any of their fancy tools to make life even the slightest bit smoother for their users)
Everyone doing it doesn't make it a good idea. The big tech companies and governments are, I think, a little more paranoid about rogue admins, so they do at least try to limit the blast radius of any given credential, but almost no-one else has that level of maturity, which creates this pretty big chasm in the resilience of IT organisations as you go from small to large.
(There's also a certain irony about IT complaining that a change to improve security would mean they can't do their job as easily)
I think you do not understand what a massive undertaking even securing a tenant in GSuite or Office 365 can be. Plus networking. Plus end-user computing.
On top of this you want companies and governments to make their own tools?
You have a vision... of something zero trust. Now make it and implement it. Oh, not so easy?
S3 buckets used to be open by default. Office 365 had MFA as optional for a looooong time. So things are improving.
I, for one, don't really want employees to install video games, porn cam clients, torrenting apps, shady vpn clients, crypto miners, remote access tools, dns "optimizers" and more generally viruses on their work computers.
On HN, if you have a valid point but get unnecessarily aggressive about it, people will downvote you for attitude. This mostly keeps the forum under control.
I am sorry, and I do get carried away sometimes, but it is frustrating seeing comments from cowboy devs saying to just give everyone admin, keep an Excel sheet of software, have people manage their own PCs, and get rid of IT, just because, as here, they got phished or breached.
That works for a 5 person company but not a 1000 person company. Or a 10 person company with 1000 machines.
I used to work in test automation for a huge company with terribly annoying IT. I can tell you for a fact that our entire department had well-developed workarounds for the most annoying policies. We even had a few Intune 0-days that we literally kept to ourselves to be able to do our jobs properly.
Because in the end, it’s not IT on the line for their odious policies causing late delivery, it was us.
What was so annoying? Having to reboot for Windows updates/programs and MS Defender running?
Also, if the company is certified in some way there are audits for these things, you understand? Such as updates, backups, security, PAM, antivirus etc :)
Subvert these controls intentionally, especially security ones = bye bye. Logs don't lie. We see you.
We never got caught or fired. I won’t detail the 0-days we used because I’m pretty sure the team is still using them, but I can assure you that the logs DID lie.