Hi, I'm the guy who wrote the original VCBlog post. I would assert that we've been consistent in our messaging: there is not an automatic fix available for Spectre. The /Qspectre switch offers help in mitigation. It doesn't offer, nor does it claim to offer, complete protection.
We--as an industry--are learning as we go with Spectre. There's a lot of data that went into the decision to release this switch with its current implementation. And this isn't the last iteration: note in Paul Kocher's writeup that we've asked for feedback as to whether anyone would use a switch that was sound (for known variants) but incurred very large performance regressions.
As evidence that this is an industry-wide issue, I ask that you reread Paul Kocher's post. Also, see Chandler Carruth's tweet this morning about this topic: https://twitter.com/chandlerc1024/status/963995521705627648. Chandler is the guy driving Spectre in LLVM.
Lastly, understand that Microsoft has a lot of customers who rely on our technologies. For their benefit it's often better to say less than to say more, especially when talking about security vulnerabilities.
> Hi, I'm the guy who wrote the original VCBlog post.
Hi from one compiler engineer to another!
> Chandler is the guy driving Spectre in LLVM.
I actually sit about 25ft away from him; we talked about his tweet before lunch today. :)
> there is not an automatic fix available for Spectre. The /Qspectre switch offers help in mitigation. It doesn't offer, nor does it claim to offer, complete protection.
When I read this sentence and the footnote in the VCBlog post, my takeaway is that /Qspectre offers incomplete protection that is nonetheless useful for a nontrivially broad class of applications. That is, I understand "incomplete mitigation" to be a stronger statement than "there exists a program in which the Spectre attack is mitigated".
But when I read Paul's post, what I understand is that the level of protection offered is not useful for applications that do not look extremely similar to the original Spectre PoC.
I wonder if you think I'm being unfair in my reading of either of these documents?
> understand that Microsoft has a lot of customers who rely on our technologies. For their benefit it's often better to say less than to say more, especially when talking about security vulnerabilities.
I'm also genuinely curious how telling customers less about a security fix might be better for them than telling them more.
"Managed memory is free. Not as in free beer, as in free puppy."
A dev manager of Exchange used that line in a talk. Never were more insightful words spoken. Devs will move from C++, where they obsess about every allocation, to .NET, and they'll totally forget that allocation is expensive no matter what the platform or runtime.
>they'll totally forget that allocation is expensive no matter what the platform or runtime.
Well, it's easier to do in a managed language. When you literally don't have to agonize or obsess over every allocation because you aren't responsible for cleaning it up (unmanaged held resources notwithstanding), you tend not to do so.
P.S.: You're always free to drop down into C or C++ if you want to get some speed, but of course you need to clean up after yourself there. A friend of mine wrote a good guide on doing so, if anyone cares https://github.com/TheBlackCentipede/PlatformInvocationIndep...
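To give a rough sketch of what dropping down via P/Invoke looks like (the library and function names below are made up purely for illustration, not taken from that guide):

using System.Runtime.InteropServices;

static class Native
{
    // Hypothetical native library with a hot routine written in C.
    [DllImport("fastmath", CallingConvention = CallingConvention.Cdecl)]
    public static extern double SumSquares(double[] values, int length);
}

// Usage: double total = Native.SumSquares(data, data.Length);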
>You're always free to drop down into C or C++ if you want to get some speed
Wouldn't C# with structs and pointers do the job in many cases? I've been able to get 50-fold increases in speed through heavy optimizations, without switching to another language. Using C or C++ solely for a "speed boost" over C# is not only unnecessary, but it creates more problems than it solves. If you don't know how to optimize within C# (as a C# developer), how are you going to succeed in writing efficient C++ code?
Once you learn the nuances and limitations of making optimizations in C#, then you should start looking into how and when other languages such as C can wisely be used. For example, C makes it easier to micromanage assembly instructions (this can be done in C# too, but not in a very practical way, and yes, I mean assembly and not IL). C also has more syntax and features suited to bitwise micromanagement, whereas in C# it can be more awkward.
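To make the structs-and-pointers point concrete, here is a minimal sketch of the kind of thing I mean (the Point type and SumX helper are made-up illustrations; compile with /unsafe). A value-type struct keeps the data packed, and a pinned pointer walks it without any per-element heap work:

struct Point { public int X; public int Y; }

static unsafe long SumX(Point[] points)
{
    long sum = 0;
    fixed (Point* p = points)   // pin the array and walk it with a raw pointer
    {
        for (int i = 0; i < points.Length; i++)
            sum += p[i].X;
    }
    return sum;
}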
Yes, they would, and the C# 7 improvements drawn from the Midori experience make it much better.
I think in general it is a culture problem.
Those of us who embraced managed languages, including for systems programming (Oberon, D, ...), know that we can be productive 99% of the time and only need to reach for speed tricks on the remaining 1%, guided by a profiler and low-level language features.
In the C and C++ communities there is a sub-culture of worrying ahead of time about how much each line of code costs, and thus spending too much time on design decisions that have zero value in the context of the application being delivered.
The problem is not making those decisions; it is making them without validating them with a profiler, or without regard to the goals the application actually has to meet.
Beyond that point, any low-level fine tuning, while fun, is needless engineering.
Midori was so beautiful. I think it would have succeeded as a .NET runtime replacement with picoprocesses. It frustrates me that we didn't open-source it.
As a believer in GC-enabled systems programming languages, I do feel it was indeed a missed opportunity, especially to change the minds of those who think C and C++ are the only way to write OSes.
Can you please point to any resources that talk about heavy optimization options in C#? That 50-fold increase you talk about is very interesting. I would like to learn more.
Great list. It's important to understand when to use each one of these. Identify your bottleneck through the use of profilers. Execution time is often dominated by blocking on memory I/O rather than CPU calculations, so if you start by writing SIMD, you're not going to get anywhere.
Accessing data on the stack instead of the heap is the #1 saver of execution time, in my experience. But your bottlenecks might be different. Locally scoped value-type variables are generally on the stack. Object-scoped and static fields and properties are on the heap.
Writes to local variables seem to be faster than reads, IIRC.
The fastest operators seem to be the bitwise instructions, IIRC.
If running in 32-bit mode, try to work with 32-bit integers. If running in 64-bit mode, try to work with 64-bit integers.
Here's an example of a major, major improvement in performance:
for(int x = 0; x < this.Width; x++)
{
    for(int y = 0; y < this.Height; y++) { foo = bar; }
}
Much faster version (due to storing a copy of Width and Height on the stack instead of the heap):
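Roughly along these lines, assuming Width and Height don't change while the loops run:

int width = this.Width;     // copy the property values into stack locals once
int height = this.Height;
for(int x = 0; x < width; x++)
{
    for(int y = 0; y < height; y++) { foo = bar; }
}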
Thanks! Your example is pretty interesting. Any reason why this is the case? In both cases, it is just accessing a memory location to read the value.
Are there compiler optimization heuristics at play here? E.g., for the local variable, the compiler knows that its value is not changing during the loop execution, so it can be kept in a register for faster access.
Register access isn't the issue. In the first example, this.Width and this.Height are accessing the Width and Height property of the current object. This requires a heap fetch on each iteration of the loop. There may be OS-specific nuances with automatic caching that I can't remember clearly enough to reliably mention.
If you can get rid of all heap lookups in your iterative loop, then you'll see a large speed boost if that was the bottleneck. Local variables exist on the stack, which tends to exist in the CPU cache when the current thread is active. https://msdn.microsoft.com/en-us/library/windows/desktop/ms6...
Unfortunately, method calls in C# have a much higher overhead than in C and C++. If you must do a method call in your loop, be sure to read this to see if your method can be inlined. Only very small methods of 32 IL bytes or less can be inlined: https://stackoverflow.com/questions/473782/inline-functions-...
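Along the same lines, for a small helper you call from a hot loop you can nudge the JIT with the AggressiveInlining hint (a hint, not a guarantee; the Clamp method below is just an illustration):

using System.Runtime.CompilerServices;

[MethodImpl(MethodImplOptions.AggressiveInlining)]
static int Clamp(int value, int min, int max)
{
    // Tiny, branch-only helper: a good candidate for inlining in a hot loop.
    return value < min ? min : (value > max ? max : value);
}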
Having worked for 7 years on the .NET runtime team, I can attest that the BOTR is the official reference. It was created as documentation for the engineering team, by the engineering team. And it was (supposed to be) kept up to date any time a new feature was added or changed.
Coercion is a concern in communities where there's a power imbalance in the home. Consider, for example, an overbearing husband coercing his wife to vote in a particular manner, or just taking her ballot and voting it himself (and coercing her to sign the back of the envelope.)
An option considered in one state (Utah?) was to allow mail-in voting, but the mailed ballot could be invalidated if you stopped into a polling place and voted. It adds some complexity but reduces the absolute chance of coercion.
In the current system people can intimidate you as much as they like but they can't follow you into a booth. They can't inspect your ballot after you've marked it.
Only you know who you actually voted for. That's the point.
Unless, of course, you are asked to provide evidence of your vote with the camera you are likely carrying with you into the polling booth.
I'd guess mail-in ballots increase participation at the expense of some coerced ballots. Surely the net outcome is positive. On top of that, it helps to remove the chance for the use of intimidation at the polling stations. I remember that being an issue last election.
Somebody else mentioned that some areas have a system where if you show up in person, your mailed ballot is invalidated. That seems like a good safeguard.
The bottom line for me is that I think the option of mailing ballots increases participation. There may be an increased number of coerced ballots as well, but I think the net result is positive.
Filming yourself could easily be prohibited. In France, I'm pretty sure the voting officer wouldn't let me cast my ballot if I were filming myself doing it.
Canceling mailed or e-voting ballots when voting in person can be countered by a coercer who requires the person to hand over their ID card for the duration of the vote.
The idea that a vote might have been bought is a dangerous one. It's dangerous even if vote buying is never proven. I'm still not convinced about e-voting for this reason. There should be absolutely no doubt about the integrity of a vote.
That hilariously restrictive definition would have classified my old beast of an ex-military cargo vehicle as a "minivan", but I doubt you'd convince any onlooker to label a vehicle like this as anything other than a "truck":
Yeah - technically. But this whole discussion is about the consumer space, not about those purchasing commercial equipment for hauling tonnage or whatnot.
Even so, though - there are plenty of consumer pickup models which are used commercially, and more than a few can handle some pretty big loads (especially towing)...
Windows 10 has two update channels. Under "Windows Update" > "Advanced Options" there are two separate settings to delay "feature updates" and "quality updates".
I don't know whether earlier versions of Windows offer this level of control as I've got Win10 on all my machines. I just leave them on automatic update and yet I still only have two ears.
Really? On a discussion about a desktop OS? There are so many things going through my mind after your comment that I can't even start typing. Branch limit reached.
I'm using Android and I have the same concerns about it. I gave my Lumia away because it doesn't have the apps that I need and, again, I had the same concerns. My time is limited so I have to pick my battles. Mobile is a lost cause.
So I decide to give feedback about one of the two desktop OSs that I use (Windows and Linux). I don't give a shit if Apple/Google is going to make you insert a coin every hour to use the OS, I would still complain to MS if they decided to do the same.
I don't even know exactly what your point is. Feel free to ask if I write Microsoft with a $.
I didn't mean to offend. I'm just saying that if you think paying for an OS means you won't have data collection, then you're unlikely to find a smartphone that suits you.
My point, exactly, is that Microsoft is just another tech company these days. They're not the Evil Empire. At best, they are one of Many Evil Fiefdoms.
I understand that people are up in arms about data collection. And I understand that Microsoft tends to bungle any kind of public messaging about data collection. I'm just pointing out that every mobile OS, at least, collects tons of data.
> I'm just saying that if you think paying for an OS means you won't have data collection, then you're unlikely to find a smartphone that suits you
> I understand that people are up in arms about data collection. And I understand that Microsoft tends to bungle any kind of public messaging about data collection. I'm just pointing out that every mobile OS, at least, collects tons of data.
Why do you keep talking about mobile when the subject of this thread and my original comment is about Windows Desktop OS? Should we talk about IoT too?
> My point, exactly, is that Microsoft is just another tech company these days. They're not the Evil Empire. At best, they are one of Many Evil Fiefdoms.
You're triggered because I mentioned the word "trust"? When data leaves your computer, it's all about trust. I don't trust anyone, so I don't want my personal data leaking in the first place. Most of the time (like in mobile) there's no viable, realistic, practical alternative.
If the money I paid for my Windows license is not enough they need to rethink their pricing. That's all I'm asking. Allow me to pay for my privacy. I would quietly leave your ecosystem if I could but congratulations, you have me locked in because of my job and certain network effects.
Wow, sorry. No, no, I'm not triggered at all. I was only trying to illustrate the fact that there are times when we have no choice but to use an OS that uses telemetry.
I'll happily concede this thread and apologize for having brought it up. I really wasn't trying to start a fight. I was trying to give what I thought was an analogous example.
I know someone who works at Microsoft (and has for 15 years.) These cartoons are brilliant--especially the org charts for different companies--but things actually are changing at Microsoft.
Some divisions change faster than others. And some old reputations are very hard to shake. But Microsoft isn't the company it used to be. There are other companies filling that role now. Example: Even though all you ever hear about is "Windows is spying on me!" I can name three big tech companies that probably know more about you than Microsoft does.
Thanks for the downvotes! But seriously, do you honestly think that Microsoft is spying on you more than Google, Facebook, or Amazon? And do you think your Mac OS doesn't have the exact same sort of telemetry (or more!) that Windows has?
While this may change with any operating system update (it did for Win7 in a supposedly "security" update), my Mac seems to want to talk to Apple only to get App Store updates, whereas the Windows machines in my office want to talk to a lot of servers at Microsoft, and do so with hard-coded IPs and weird process names. Can you point to an Apple equivalent of this list? https://arstechnica.com/information-technology/2017/04/micro...
My browser has a strict policy that tells Google, Facebook and Amazon exactly what I allow it to; that sometimes causes things to malfunction, and I can live with that. Can you tell me how to do that with Win10?
Oh, and no: Fedora, Arch, and even Ubuntu with its worst blunder in this respect, do not spy on me even 1/10,000th as much as Microsoft does.
Sure, I can accept that a platform can be doing more than a website and also that you have less control over a platform. Good points, both.
I won't comment on Android. I already got ripped to shreds above for trying to bring up the point that mobile is completely locked into two major players that act exactly like Windows 10.
Great feedback, thank you! Linux targeting isn't my area, but I know the guy who owns it would love to chat with you. If you want to help us make this better, would you mind sending me an email at apardoe @ youknowthecompany.com? I totally understand if it's not worth your time. But thank you regardless!