Hacker News | klickverbot's comments

> I think this is a pretty small point to get hung up on. The rest of her article is perfectly reasonable.

The above isn't the only place that betrays her lack of understanding, though.

For instance, she confidently writes "Ion traps are used for example by IonQ and Honeywell. They must “only” be cooled to a few Kelvin above absolute zero," but this is just wrong; trapped-ion qubits do not, a priori, require cryogenic cooling. Yes, lowering the temperature can be useful for incidental reasons, as it improves the vacuum quality and reduces some technical excess noise sources, but this is simply an engineering choice. Many of the high-profile results in trapped-ion quantum information processing were in fact achieved in room-temperature systems. And even if one does opt for cryogenic cooling, the ~tens of Kelvin regime of interest here is incomparably easier to reach than the tens of millikelvin required for superconducting qubits and other solid-state spin platforms (where those elaborate dilution refrigerator "chandeliers" are actually required to keep the qubits intact). In fact, in ratiometric terms, the temperatures of interest are closer to room temperature than to that millikelvin regime!

Like many physicists, I'd naturally be inclined to agree with Sabine Hossenfelder as far as her distaste for marketing hype is concerned, but in making authoritative-sounding statements without the knowledge to back them up, and misrepresenting what one would hope she knows are the actual scientific facts in the service of a punchy script, she is hardly doing any better than those private-sector hype evangelists she ridicules. Beware of Gell-Mann Amnesia…


Essentially, yes; all of quantum key distribution (QKD) is about generating a secret key which can then, e.g., be used as a one-time pad. The novelty here is that we can do it with far fewer assumptions about how the quantum devices behave than in conventional QKD.
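As a minimal sketch of that last step (the key bytes here come from `os.urandom` purely as a stand-in for shared key material produced by a QKD run):

```python
# Toy sketch: consuming shared secret key material (e.g. from a QKD
# session) as a one-time pad. XOR with an equally long, never-reused
# key is the same operation for encryption and decryption.
import os

def otp_xor(data: bytes, key: bytes) -> bytes:
    """XOR data against an equally long one-time key."""
    assert len(key) >= len(data), "pad must cover the whole message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = os.urandom(len(message))  # stand-in for QKD-derived shared randomness

ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message  # Bob recovers the plaintext
```

The information-theoretic guarantee only holds if the key is truly random, at least as long as the message, and used exactly once.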


> My naive security architect view is, I get the impression the people doing quantum engineering and those working as cryptographers have a very narrow overlap.

You probably aren't wrong, but also note that popular science articles are probably not the best basis for judging this. :) A number of people working on QKD have done serious work on classical cryptosystems as well, although the overlap of that set with people working "in the trenches" of practical IT security is of course yet another topic.

> To do the data exchange, it's not encrypted to a key per se […]

I'm not sure whether this is what you are wondering about, but the actual data exchange is completely separate from the key distribution. Particularly for the entanglement-based protocols like those used in device-independent scenarios, there isn't really any data exchange between the parties during the key distribution stage at all (apart from the classical post-processing steps such as error correction after the fact). Rather, the quantum resource provides random, but correlated, bit strings at the two nodes. Only after the QKD protocol has finished is there actual data exchange using the secret key material, probably using the key as a one-time pad to keep the information-theoretic security guarantees.

Thus, trying to think about these protocols in terms of data transfer doesn't strike me as particularly natural; in fact, if the entangled state shared between Alice and Bob is maximally entangled, the raw bits obtained from the quantum devices are always going to be completely random.

The security proofs are indeed based on careful entropy considerations. You mentioned implementation details of classical cryptosystems. These primitives – S-boxes, etc. – motivate why we should reasonably expect cryptanalysis of such algorithms to be hard in practice, even though we know they can't be secure on information-theoretic grounds alone. In the QKD case, however, we can make information-theoretic security statements without any reference to computational power. Thus, a security analysis will look at quite a different set of things: on the one hand, whether the entropy accounting is correct, and on the other, whether the practical implementation actually corresponds to what that accounting assumes.


From a security perspective (and not a science perspective), we need to be able to make assertions about the security of a scheme, and provide some kind of evidence for them. The entire history of cryptography is literally the story of persuading people they are protected by something they don't understand and can't reason about, and then having a backdoor into it.

Popular science articles aren't sufficient to reason about the science - but they are at least as rigorous as the product spec sheets people will make their security decisions on, so I'd propose pop articles are admissible in discussing the security of the scheme. It's not on the consumer to understand, but on the producer to demonstrate.

The issue with QKD right now is that the risk/benefit isn't there from a security product perspective. If I have something that needs quantum security, I necessarily don't trust a bunch of people who say, "trust me, it's science," as I am looking at where the risk goes. If I'm using crypto on classical computers, most of my risk gets diffused through standards bodies (NIST, essentially), and then my vendors, banks, insurers, etc. QKD and PUFs have the same problem, which is snakeoil risk.

The information theoretic security (as a function of entropy) of an algorithm is scientifically interesting, but when it comes to applying it to risk management (e.g. distributing accountability), there is a ceiling on that. Measuring security based on work or operations over a classical compute cost / complexity class, I agree, is an orthogonal concern with QKD, but security as defined by where the risk goes needs a definition it can reason about.

I agree it (the analysis) will look different, and if I were to equip my fellow security analysts with a tool, it would be to not be persuaded that their lack of a quantum physics background disqualifies them from interrogating the real security benefits of QKD proposals.


First author of one of the preprints mentioned in the article here (theory in Paris/Geneva/Zürich/Lausanne, experiment in Oxford) – happy to answer any questions! I obviously speak only for myself, not for any of my colleagues, and as a matter of course, I should also mention that publication in a peer-reviewed journal is still pending for these results.

One point to mention – which I feel quite strongly about, and I think my collaborators do as well – is that sweeping generalisations like "perfect security" are really not the point, and, if anything, have mostly done the field a disservice. Such statements do make for catchy headlines, and while there is a solid technical meaning attached to them (information-theoretic security), to a wider audience they might suggest that QKD replaces the need for careful security engineering, which is definitely not the case: if your processing nodes, say, leak out the generated key material via a classical side channel, no amount of theoretical security guarantees will save you!

Rather, device-independent quantum key distribution allows you to scale back the assumptions on your implementation to a well-motivated, minimal set. To me, this is already intriguing enough without the need for hyperbole!


The first sentence of your paper abstract is:

Cryptographic key exchange protocols traditionally rely on computational conjectures such as the hardness of prime factorisation to provide security against eavesdropping attacks. Remarkably, quantum key distribution protocols like the one proposed by Bennett and Brassard provide information-theoretic security against such attacks, a much stronger form of security unreachable by classical means.

This is not wrong, but in my opinion quite misleading. QKD is no replacement for asymmetric cryptography since it requires exchanging a secret key before the communication can take place. This makes it functionally equivalent to a symmetric stream cipher. So why do you mention prime factorization and cite RSA? The security of QKD should be compared to that of the best symmetric algorithms, not that of asymmetric ones.

I have seen this pattern in many talks and papers from the field. Maybe the issue is that the QKD community seems to have almost no overlap with the IT security community. In my experience, QKD people almost never talk about how you would actually use and/or attack a system in practice.


> QKD is no replacement for asymmetric cryptography since it requires exchanging a secret key before the communication can take place.

Your general point about QKD "promises" vs. practical IT security is well taken, particularly as I am much more of a general quantum physicist and spare-time compiler/infosec geek than a QKD person myself.

However, note that asymmetric cryptography doesn't really solve the authentication problem you mention either. If you don't want to place your trust in some sort of PKI, you are back to Alice and Bob having to meet first to exchange some sort of key material (e.g. their public keys) to later avoid impersonation. Given an authenticated channel, both QKD and classical public-key cryptography can construct a secure channel for messages of arbitrary length, but the latter only for computationally bounded attackers. Of course, this is not to say that a trusted PKI can't be a sensible assumption in practice.


All of this is correct. But I still think it is misleading to create the impression that QKD could be a replacement for RSA, especially since asymmetric cryptography and PKI are cornerstones of the modern internet. Why don't you change the abstract and cite Rijndael or something like that? Your work is a very impressive achievement; I am sure Nature will publish it either way.


QKD advocates have been doing this for ages; it's been pointed out repeatedly that they make dishonest claims, and they continue to do so. Here's a paper from 2004(!) pointing this out: https://eprint.iacr.org/2004/156

It's not an accident, it's deliberate deception.


I believe you can achieve secure communication by combining QKD with an asymmetric signature algorithm (hash signatures being a particularly interesting choice), while that's not possible by combining a stream cipher with a signature algorithm.


> Rather, device-independent quantum key distribution allows you to scale back the assumptions on your implementation to a well-motivated, minimal set. To me, this is already intriguing enough without the need for hyperbole!

Would it be accurate to say it is scaled back to the level achieved by classical (non-quantum) cryptography?


> Would it be accurate to say it is scaled back to the level achieved by classical (non-quantum) cryptography?

Not quite. Classical cryptography of course requires the additional assumption that the computational capacity of the attacker is limited (at least if the amount of key material available is less than the length of the messages to be exchanged). QKD does not need any such computational assumptions. Looking at this purely from a theoretical perspective, I hope you'll agree that the ability to create new shared randomness "out of thin air" by drawing on quantum correlations, and to do so in an information-theoretically secure fashion, is a pretty neat trick.

Now, if you asked me how likely it is _in practice_ that $THREE_LETTER_AGENCY has broken your cryptosystem to the point where they can feasibly attack it/have backdoored it, compared to the likelihood that they've bugged your devices in a supply chain attack or found any number of other ways to compromise the practical implementation, I suspect my answer wouldn't be much different to yours. Nevertheless, I still think it is interesting to explore additions to the cryptographer's toolbox that, in a very practical sense, have a rather different profile of assumptions and tradeoffs.


Oh absolutely, the theory behind QKD is fascinating! And I do think that some day there may be actually secure practical implementations, maybe even ones that are practical for more than a few niche applications.

But you mentioned the assumptions on the implementation, not on the underlying mathematics. The thing that concerns me is that QKD introduces additional hardware to operate, and there have been many demonstrations of weaknesses in that hardware that threaten the overall security of the system. With DIQKD you ensure that those issues no longer affect security (again it is absolutely remarkable that this is possible at all), but now you still have to concern yourself with all the implementation vulnerabilities that also plague classical cryptography. In that sense I mean that the implementation assumptions are now the same.


I always read perfect secrecy as a term of art with some technical meaning.

This protocol seems to solve the communication-at-a-distance problem for which asymmetric encryption was developed, but since then a lot of other uses, e.g. signing and multi-party decryption and so on, have come out of public key. Do you think there will be entanglement-based replacements for these?


> I always read perfect secrecy as a term of art with some technical meaning.

That's indeed the case, but I fear the subtle technical definition here is usually one of the first things to go in the cycle of press releases and news articles, entirely too quickly giving rise to headlines that speak of "unhackable cryptography" or things like that. I've slightly edited my above post to clarify this, thanks.

> Do you think there will be entanglement based replacements for these [other protocols]?

One thing to note is that QKD is fundamentally a primitive to create shared, private randomness, not a communication channel – of course, the output can be used as the key for one-time pad encryption, but you might as well use it in some other way.

For applications beyond that, I am really not an expert, but from what I know, people are looking into a variety of protocols, such as for leader election. There was a review article a few years back by Wehner et al., "Quantum internet: A vision for the road ahead" (https://www.science.org/doi/10.1126/science.aam9288), which highlights some proposals.

As for applications like signing, one aspect to consider is that quantum entanglement will, at least for another decade or two, always be much shorter-lived than classical data at rest. Thus, most practical quantum protocols will boil down to creating and making use of entanglement in a short amount of time, e.g. to initially establish some sort of shared secret, make a coordinated decision, etc.


> a lot of other uses for public key, e.g. signing and multi-party decryption and so on have come out of public key

To the best of my knowledge, multi-party decryption isn't really related to public key cryptography. Sending a message to a single recipient looks like this:

1. You write a message.

2. You encrypt it with a symmetric algorithm.

3. You encrypt the key to the encryption in step (2) with an asymmetric algorithm, using your recipient's public key.

4. You send them the combined message, encrypted ciphertext plus encrypted key-to-the-ciphertext.

5. They use their private key to decrypt the key-to-the-ciphertext.

6. They decrypt the message using the key you just sent them.

It's done that way, as far as I've learned, mostly because symmetric encryption is faster than asymmetric encryption.

But multi-party decryption is exactly the same:

1. You write a message.

2. You encrypt it with a symmetric algorithm.

3. You encrypt the key to the encryption in step (2) using the various public keys associated with each of your intended recipients.

...

So instead of a single-recipient message being a ciphertext accompanied by a header revealing the encryption key to the ciphertext, a ten-recipient message is a ciphertext -- exactly the same ciphertext! -- accompanied by ten headers, each of which is only readable by a particular private key. There's nothing about this method that draws on public key cryptography; if I've exchanged OTP material with each of ten people, I could send a multi-recipient message exactly the same way. (And doing so would be at least as valuable as it is in the public-key case -- doing things that way allows me to send a message of arbitrary length while only consuming a bounded amount of OTP material.)
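The pattern described above can be sketched in a few lines. This is a toy, not real cryptography: the "stream cipher" is a stand-in built from SHA-256 in counter mode, and, matching the OTP remark, each recipient wraps the session key with a pre-shared 32-byte pad rather than a public key (a real system would use RSA, X25519, or similar for the wrapping step):

```python
# Toy multi-recipient encryption: one ciphertext under a random
# session key, plus one small "header" per recipient that wraps
# that session key with recipient-specific key material.
import hashlib, os

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """XOR data with a SHA-256-in-counter-mode keystream (toy cipher;
    same function encrypts and decrypts)."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out += bytes(c ^ p for c, p in zip(chunk, pad))
    return bytes(out)

# Pre-shared 32-byte pads, one per recipient (stand-in for public keys).
recipients = {name: os.urandom(32) for name in ["alice", "bob", "carol"]}

message = b"same ciphertext for everyone"
session_key = os.urandom(32)
ciphertext = keystream_xor(message, session_key)  # encrypted exactly once

# One header per recipient: the session key XORed with their pad.
headers = {name: bytes(k ^ p for k, p in zip(session_key, pad))
           for name, pad in recipients.items()}

# Any recipient unwraps the session key and decrypts the shared ciphertext.
bob_key = bytes(h ^ p for h, p in zip(headers["bob"], recipients["bob"]))
assert keystream_xor(ciphertext, bob_key) == message
```

Note that the bulk ciphertext is produced once regardless of the number of recipients; only the small per-recipient headers grow.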


Can you link the preprint, by chance? I can never find the actual papers from Quanta...


QKD continues to be cryptography snake oil. Interesting for research, useless for actual real-life use.


There isn't anything special about functions; the original article does not describe this correctly. Rather, the big conceptual difference is the point at which the expression is evaluated – once for the whole program (`=`), vs. at each call site (`=>`).

I presume this got accepted because it fixes a well-known gotcha with default parameters in Python due to early evaluation, where, for instance, the dictionary instance in `def fun(args={}): …` would be shared between all invocations, leading to all sorts of fun bugs. This is especially pernicious as most Python programmers will know other languages as well, where this tends to be handled much more sensibly (e.g. in C++, D, …) and default arguments are evaluated at each call site.
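The gotcha in question, in runnable form (a minimal sketch; `fun_fixed` is just the usual `None`-sentinel workaround, not the proposed `=>` syntax):

```python
# Early evaluation: the default dict is created once, at definition
# time, and then shared by every call that omits the argument.
def fun(args={}):
    args.setdefault("calls", 0)
    args["calls"] += 1
    return args

print(fun())  # {'calls': 1}
print(fun())  # {'calls': 2} -- same dict as the first call!

# The conventional workaround simulates call-site evaluation by hand:
def fun_fixed(args=None):
    if args is None:
        args = {}  # a fresh dict on every call
    args.setdefault("calls", 0)
    args["calls"] += 1
    return args

print(fun_fixed())  # {'calls': 1}
print(fun_fixed())  # {'calls': 1}
```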


I think this could have been fixed by always evaluating default args on each invocation. There is no reason why a default arg, of all things, should carry any mutable state. The people who would complain probably already had bugs in their code.


Physicist here too. What exactly do you disagree with? The parent comment is sound – thermal imaging cameras typically under-read on shiny metal surfaces. Their emissivity/absorptivity at relevant wavelengths is low, and reflectivity is high. Thus, their own Planck spectrum is (approximately) scaled down by their emissivity, and consequently the radiation in the measured MIR band is mostly what is reflected, which tends to come from the room-temperature environment.

A polished piece of metal makes a shitty black body. This is also why shiny metal (foil) is used to curb unwanted radiated heat transfer everywhere from thermos flasks and cryostats to space probes. (The lower emissivity further improves the efficiency of multi-layer insulation.)
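To put rough numbers on this (my own back-of-the-envelope sketch, not from the thread; the emissivity value is an assumption typical of polished metal):

```python
# How much of the 10 micron radiance leaving a shiny metal surface at
# 60 C is its own thermal emission vs. reflected room-temperature
# background? Assumes a grey surface: reflectivity = 1 - emissivity.
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(wavelength_m: float, temp_k: float) -> float:
    """Spectral radiance of a black body (Planck's law)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / math.expm1(x)

wl = 10e-6                   # 10 micron band
eps = 0.1                    # assumed emissivity of polished metal
t_obj, t_env = 333.0, 296.0  # 60 C surface, 23 C room

emitted = eps * planck_radiance(wl, t_obj)
reflected = (1 - eps) * planck_radiance(wl, t_env)

# For a low-emissivity surface, the reflected room-temperature term
# dominates what the camera sees, so it under-reads the metal.
print(reflected > emitted)  # True for these numbers
```

With these assumed values, the reflected room-temperature component makes up the large majority of the measured radiance, which is exactly the under-reading effect described above.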


Let me try again. Assistant Professor of Physics here (not a grad student).

Yes, reflectance of room temperature aluminum at those wavelengths is pretty good (not true for all metals BTW). Yes, this usually makes it hard to distinguish thermal radiation and reflected radiation with metals. What are you trying to say though? That whatever comes off from a metal must always be a reflection coming from somewhere else?

> Thus, their own Planck spectrum is (approximately) scaled down by their emissivity, and consequently the radiation in the measured MIR band is mostly what is reflected, which tends to come from the room-temperature environment.

I don't know what you mean by "Planck spectrum is (approximately) scaled down" (as "Planck spectrum" only refers to thermal radiation and is generated in a separate process from reflected photons [one is governed by the conduction band whereas the other is governed by everything up to Fermi level] and you can't hope to suppress thermal radiation by simply shining random environmental light on a metal --there is no such thing as "scaling down" of thermal radiation unless you engineer such property), but there is just no way that 10 micron photons at that intensity could be coming from a room-temperature environment.

So your blanket statements about metals aside, the hot area in that picture is due to a very specific signal which can't be due to something that's reflected from the environment. No significant fraction of those 10 micron photons coming off from that localized area around the CPU could have originated from the environment -- assuming that those pictures weren't taken in a hot oven and someone focused the thermal radiation onto the heatsink to get that amount of intensity.

And as I mentioned, that's pretty trivial to test. If those 10 micron photons were coming from the environment as you or the parent comment suggest, the thermal camera would report ~60C even when you look at the Pi 4 when it is cooled (again, this is something one can use as a "dark frame" and subtract off from all readings if you're trying to be more accurate). This is clearly not the case, though, as you can see in the video on the blog post.


> What are you trying to say though?

In the first image in the linked article, the thermal camera picture has a scale at the bottom. On the scale shown, white and red are hottest (66°C) and blue and black are coldest (23°C). The CPU is black (23°C), and the PCB directly adjacent to it is white (66°C).

kees99 and klickverbot are saying it's unlikely the CPU is actually 23°C, especially given the author's statement the CPU was around 60°C, and that it's well known taking thermal camera images of things with different emissivities will produce inaccurate results.

kees99 is also saying, given that the thermal image doesn't accurately measure the temperature of the CPU, the article's statement that the metal casing helps isn't really warranted.


CPU is at 23C? Are we looking at the same image?

The CPU is the heat generator there, and is in contact with metallic regions around 60C (the red ring, if you compare to the real picture and follow the metallic bevels), where heat conductivity abruptly drops, which is what I've been talking about from the beginning. Since the heat is generated by the CPU and flows to the metal casing and to the PCB, the CPU can't be lower than 60C.

I agree that the reading for the inner region of the metal casing (which is not the CPU) must be off, and it's probably because the emission intensity there isn't strong enough and the camera software is mixing the emission and reflection when inferring the temperature (which gives physically incorrect results because the spectrum won't obey Planck's law, but the error depends on how different the temperatures are, and gets much stronger as they drift apart) rather than doing something like a "dark frame" subtraction (which is doable in principle).

Accuracy concerns aside, though, everything we see there (when you consider the physical context) supports the fact that the metal casing helps spread the heat (which is obvious; it's a material with high heat conductivity, and there wouldn't be any need to put it there otherwise).

Even the 60C reading must be off by some for the same reason (given the regions appearing at around 70C), of course, but I assume OP doesn't care about that level of accuracy.


I agree it's true metal conducts heat better than plastic, and that a metal package is a conventional choice for that reason.

I disagree that the thermal image provides evidence of those truths.

Does the image prove the CPU has a low temperature? No, the image reports the temperature inaccurately. Does the image prove the package has no hotspots? No, it wouldn't show hotspots if they were there. Does the fact the PCB gets hot tell us much? Not really, you'd expect heat to conduct from the package and balls to the PCB no matter what the package was made from.


If that cap didn't spread the heat as well, what you'd be seeing on the thermal camera would be something that glows around 60C-70C (because clearly, the camera's software can more or less resolve the room temperature reflection from 60C thermal radiation), and the color would be more or less uniform in the region above the CPU. There wouldn't be such a strong observed temperature gradient.

Which is the evidence you're looking for.


Genuinely curious person here. What is the key reason to use photon size instead of wavelength? To stress the quantized nature of the radiation being captured by the thermal camera? To be honest, it is the first time I've seen photon size semi-casually mentioned in a conversation. Then again, I only had physics in high school.


> What is the key reason to use photon size

Photon size wasn't used. Micron is an informal name for 1e-6 m; wavelengths from 0.7e-6 m to 1e-3 m correspond to infrared radiation:

https://en.wikipedia.org/wiki/Infrared

So "those 10 micron photons" there mean "the photons of the radiation with the wavelength of 1e-6 m."


Ah! Makes so much more sense, thanks!


They're the same thing: de Broglie wavelength would be the closest thing if you want to assign an effective "size" to a particle.


That actually makes a lot of sense after a bit of Wikipediaing. Thank you for broadening my knowledge.


Reproducible in Firefox 67 on macOS as well. "DOWNLOAD" is only in one line for very wide windows, and for my default half-screen-wide window size (960 px), the main body title renders as

NATIONA

L

PARK


There seems to be something very strange going on with the way you are building LDC.

On the machine I am typing this message from [1], building DMD using DMD in optimized mode takes 58s, plus another 2s for druntime and 7s for Phobos (the two parts of the D standard library, for those not familiar).

A release build of LDC using LDC (CMake/Ninja), on the other hand, takes about 90s, which includes several versions of the runtime (only one is built for DMD by default), the JIT support libraries and a few ancillary tools. This is with debug info enabled, disabling it speeds up the build a bit further.

Since these are different codebases and the LDC build makes better use of the available cores, these are obviously not directly comparable if what you are talking about is compiler performance. However, a release build of DMD using LDC on the same machine takes about 45 seconds – i.e., it is faster and produces a faster binary.

Take these numbers with a grain of salt, obviously, as this was hardly a controlled benchmark. Your statement just conflicts with my experience working on both compilers, and the timings hopefully illustrate why.

---

[1] A 2015 MacBook Pro (i7-4980), so hardly anything out of the ordinary.


Nope, not photoshopped. It's a single exposure; the apparatus is illuminated using flashes.

The latter made lighting the shot in a controlled fashion a bit easier than if I had used continuous sources – you'd be looking at using a torch with a bunch of filters or a computer monitor on the lowest brightness otherwise.


Photographer here. The amount of attention this has received has caught me a bit off guard – this is really just a somewhat pretty picture of what is a standard technique in physics by now.

I'm putting together a short post with answers to some of the most commonly asked questions, but in the meantime, check out this great comment by a well-informed Redditor:

https://www.reddit.com/r/interestingasfuck/comments/7x4o27/p...

