Hacker News
Security is Mathematics (daemonology.net)
73 points by joshrule on Feb 15, 2011 | hide | past | favorite | 46 comments


The best security researchers in the world are almost uniformly not trained in mathematics. Here's a short list of top-tier researchers. Spot the mathematicians!

* Mark Dowd

* John McDonald

* Alex Sotirov

* Dino Dai Zovi

* Charlie Miller

* Michal Zalewski

* Aaron Portnoy

* Dave Aitel

* David Litchfield

* Barnaby Jack

This doesn't invalidate the blog post, but I will go on to suggest that quite a lot of people with extensive formal training in mathematics either (a) have/had careers in software security with less spectacular results than e.g. Aaron or Michal or (b) have produced, despite incentive to the contrary, some really crappy code.


Leaving off the top cryptographers from a list of security researchers seems a bit disingenuous. However, including cryptographers might make Colin's argument trivially obvious, since so many of the top cryptographers come from mathematics backgrounds. Therefore, I'll just add Rolf Rolles to your list, and note that he does come from a mathematics background.


Neither Colin nor Schneier was talking about cryptography. Cryptography and security are not the same thing.

Rolf Rolles is a very smart guy, but he's not a security researcher; he works in content protection and reverse engineering. Having said that: sure. That's one. One. :)


Charlie Miller

You mean the Charlie Miller who has a PhD in mathematics, right?

Also, did you seriously just write a list of top-tier security researchers which didn't include djb?


I didn't know Charlie had a math degree! That's two. :)

Rolf Rolles is more of a security researcher --- a lot more --- than Daniel Bernstein. But I'll concede it! He's a third.

We're at 3. Do you think I can't name 10 more notable security researchers, justifying each of them, to back up my point here?


Rolf Rolles is more of a security researcher than Daniel Bernstein

I think we'll have to agree to disagree here. But given our different focuses in security, it's not all that surprising that we have different definitions of "security researcher".


Whether they have "Mathematics" on their diploma is immaterial; I'd bet good money that most of these guys took some real serious math classes in school.


I presume you mean at university? There are no serious math classes at school.


a university is a school, why are you trolling


It might have been a genuine mistake - University and school are not really interchangeable terms here in the UK.


Indeed. Though I should probably have been more generous with my input. (And growing up in Germany, a school is something completely different from a university there.)


The author of the article was referring to a method of learning a proper security mindset. It's the classic self-taught vs. classroom-based learning debate.


How about Dan Boneh and David Brumley? They are security researchers with strong backgrounds in mathematics.


Computer security is a social science, so degrees in ethnography, epistemology, or organizational behavior are a lot more relevant than mathematics. Agreed that math teaches you rigorous thinking and questioning assumptions, but outside of the narrow areas of cryptography and systems analysis the specific skills you learn aren't that important for security work.


Right. I don't think thorough economic analyses come into play nearly enough when people think about security.


Also an excellent point. Ross Anderson started pushing the term "Security Engineering" around 2000 and economics were a big part of it.


Can you elaborate on the economics involved, please? Sounds interesting.


Economics is about the study of choice, at a deep level: the study of choice under conditions of scarcity, or with constraints (a definition like that is in most introductory textbooks). It's almost like psychology applied to crowds of interacting agents.

(Some people seem to get the idea that economics is like accounting but vaguer, the way a climatologist is to a weatherman or something. Macroeconomics is the bit that gets in the media most often, but it's also the most ideology-based rather than fact-based.)


One example: The universal truth of "DRM Doesn't Work" actually simply means "DRM isn't strong enough for big companies to stand up to the ravenous appetites of everyone on the entire internet."

This is why big companies with content everyone craves have such a hard time with DRM -- there's just too much firepower arrayed against them. If you look closer, you find that "DRM Doesn't Work" isn't quite true. It's just not as strong as Philips or Sony would like it to be.


The Workshop on the Economics of Information Security (WEIS) has some great stuff: http://weis2010.econinfosec.org/

Here's Ross's "Economics and Security" resource page http://www.cl.cam.ac.uk/~rja14/econsec.html


"Amateurs study cryptography; professionals study economics." - Allan Schiffman


Knuth is famous for the remark "Beware of bugs in the above code; I have only proved it correct, not tried it", and the implicit statement that a proof of correctness is not adequate to ensure that code will operate correctly is one I absolutely agree with.

My boss told me a similar story of a computer science professor giving a cross-group talk in which he pitched the concept of formal methods to a group of physicists who, among other things, programmed collectors for particle experiments. (Supposedly this happened at Cornell in the seventies.) The CS professor enthusiastically and animatedly proved the correctness of an algorithm for solving a simple graph-coloring game and then asked whether there were any questions. One of the physicists raised his hand and asked, "How fast does it run?"

"That's the beauty of formal methods! Now that I've proved the algorithm correct, I already know it will produce the right answer. There are far too many possible inputs to verify correctness via testing, so there's actually no point in running it at all."


True for a very well-defined set of inputs.

The problem with writing secure code that works well is making sure that all inputs conform to your well-defined set... i.e. they are a subset of your well-defined set.

Compounding this is the non-apparent dimensionality of your sets. A good example is concurrency: if a function doesn't hold an exclusive lock on the array of data it's going to manipulate, the set could actually have two dimensions (one being time), in which the array could change.
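A minimal Python sketch of that extra time dimension (the function and counts here are made up for illustration): without the lock, two threads' read-modify-write cycles on shared state can interleave and lose updates.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    """Add 1 to the shared counter n times, holding the lock for each update."""
    global counter
    for _ in range(n):
        # Without this lock, another thread could read the same old value of
        # `counter` between our read and our write, silently losing an update.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: the lock makes the result deterministic
```

Delete the `with lock:` line and the final count becomes nondeterministic on most Python builds, even though each individual `counter += 1` looks atomic on paper.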

I got a C- in Analysis II. I needed a C to get a Math minor, but decided it wasn't worth it.


You can see that computer science is an applied science. A mathematics professor would have given a proof that such an algorithm exists or, more likely, that algorithms exist for solving several more general problems, and left it at that.


And unless you were especially lucky, the existence proof would be non-constructive.


I still hope that somebody finds a non-constructive proof of P=NP.


Where the author's narrative breaks down is when he draws a parallel between security and proofs of theorems. The closer connection is between security and adequacy of theorems.

It is true that you need to make sure your software conforms to its specifications, and that process does involve informal (or formal) proof-like reasoning, but that is only a small part of the challenge. This is the part that mathematicians would be good at, but other technologies are good at this as well (type checkers prove weak properties, verification tools prove stronger ones). None of this requires a "twisted mind", just attention to detail.

The problem of writing software specifications that correspond to the abstract notion of security is the tougher task. In math, the closest analogy is figuring out what theorems people actually care about. While I don't know for sure, I'm skeptical that a math education emphasizes this skill. Security takes this skill a step further and requires Schneier's "twisted mind" to consider all the real-world ways that things could go wrong (including, among other things, the incentives that might motivate an adversary) and write specifications for secure, but useful, software.


Obviously I can’t speak for Colin Percival, but I think that the point of the article is quite a bit simpler:

I read this as saying that the mindset required to write proofs is similar to the mindset required to write secure software. The proof mindset is useful for considering “all the real-world ways that things could go wrong.”

I think the paragraphs about Knuth’s famous quote just muddy the water.


the mindset required to write proofs is similar to the mindset required to write secure software

Bingo.


My point wasn't intended to be as low-level as it came across. Perhaps a more clear restatement is that I suspect that the attention to detail that I associate with the "proof mindset" isn't quite the same thing as the "twisted mind" that Bruce Schneier talks about; so I'm not convinced mathematicians are more likely to have that skill.


Forget security, programming is mathematics. If you program, you should be doing this whether you're writing security code or not. Nothing saves me more keystrokes or debugging time than proving things about my algorithm before I code it, and I know this comes from math because my CS-only friends can't do it. I don't care if you want to go in to algorithm theory or get a software job at a bank, if you're a CS major, you need to take some rigorous math or you'll be at a disadvantage.


You can even prove that. It's called the Curry–Howard isomorphism (http://en.wikipedia.org/wiki/Curry%E2%80%93Howard_isomorphis...). Guess who has the highest bid on ads for `curry howard isomorphism' on Google?
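A tiny illustration of the correspondence, sketched here in Lean 4 syntax: the same term can be read either as a program or as a proof.

```lean
-- Under Curry–Howard, a program of type A → B is also a proof that A implies B.
-- A projection function doubles as a proof about conjunction:
example {A B : Prop} : A ∧ B → B := fun p => p.2

-- Function application doubles as modus ponens:
example {A B : Prop} : A → (A → B) → B := fun a f => f a
```

Type-checking the program and checking the proof are literally the same operation.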


I am not sure that I buy this. There are plenty of people who have internalized a painstaking and rigorous approach to problem solving, often from a young age. While many of these are also those who would excel in a mathematical environment, a mathematical education fails to capture any of the specific details of security.


I don't think he is trying to say mathematics is sufficient to be good at security, just that training in mathematics develops the right mindset for security.

I have a degree in math and can see how my attitude changed as I progressed. When taking my first analysis class I was sure it is no coincidence the word begins with anal. It took a while for me to develop habits of skepticism about things that seem obvious at first glance. That's the attitude I think he is describing.


When taking my first analysis class I was sure it is no coincidence the word begins with anal.

I'm going to steal that line, if you don't mind. :-)

It took a while for me to develop habits of skepticism about things that seem obvious at first glance. That's the attitude I think he is describing.

Yes. The attitude of "I don't care if this looks right; am I absolutely certain that it is right, in all possible universes consistent with my axioms".


true, but the same's true for any rigorous training -- philosophy, Talmudic studies, law, physics, ...


This article is correct, except it omits one important point... writing programs is harder than writing proofs. Especially security code. With sufficiently complex proofs it is often hard to find holes, but security code (and code in general) can be attacked in ways that just aren't available against standard math proofs. There's no notion of "fuzzing" with proofs.

But in any case, the gist of the article is correct -- the rigor used in math proofs is the MIN bar for security code.


My take is that mathematics provides rigorously defined, leak-proof primitives and operations. Most of the difficulty involved in porting a piece of math to a program is in plugging the leaks in the abstractions provided by efficient machines. For example, infinite-precision real arithmetic (with no overflows, loss of entropy due to FP rounding mode, etc.) is assumed in mathematics, but is devilishly hard to get right (and fast) on fixed-width machines. Obtaining a passphrase from the user in order to hash it is assumed in the algorithm for the hash function, but in reality doing so without compromising the rest of your system can be much harder than coding the hash function correctly.
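The rounding point is easy to demonstrate in Python, whose `fractions` module provides the exact rational arithmetic that the math on paper assumes:

```python
from fractions import Fraction

# IEEE-754 binary doubles cannot represent 0.1 or 0.2 exactly,
# so the familiar identity fails after rounding:
print(0.1 + 0.2 == 0.3)  # False

# Exact rational arithmetic behaves the way the abstract math does:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

The abstraction leak is exactly this gap: every theorem about the reals silently assumes the second behavior, while the machine gives you the first.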

Things like side channel (e.g., timing) attacks add an orthogonal dimension of complexity to secure code that simply doesn't exist (or is rightly elided) in the related math.
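A sketch of the timing point in Python (the byte strings are made up for the example): a naive comparison returns as soon as it finds a mismatch, so its running time leaks how long a prefix of the secret an attacker has guessed correctly. The standard library's `hmac.compare_digest` exists precisely to avoid that leak.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Functionally correct equality check that leaks timing information."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # early exit: time taken reveals the mismatch position
    return True

secret = b"correct horse battery staple"
print(naive_equal(secret, b"wrong guess entirely, sorry!"))  # False
# compare_digest examines every byte regardless of where a mismatch occurs:
print(hmac.compare_digest(secret, b"correct horse battery staple"))  # True
```

Both functions are "correct" in the mathematical sense; only one of them is secure against an adversary with a stopwatch.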

Proofs only operate in the domain of the pure and infinite. The glue logic to interface that with the real world is what makes it tricky to write secure code, especially when errors in any part of any program can compromise an entire system (better hardware-enforced isolation between pieces of code is possible today, and indeed used, but isn't pervasive yet because performance is still king, IMO).


>writing programs is harder than writing a proof

Which programs and which proofs?


Any program function can be cast as a theorem (although the reverse is difficult). Proving that theorem is easier than writing the corresponding program function.

And by easier, I mean that the proof is easier to pass off as a correct proof than the program is to pass off as a correct program. Of course, writing an actually correct proof is just as difficult as writing an actually correct program -- for the most part. Sometimes, due to real-world constraints on programs (like dealing with fault tolerance, or races introduced for performance), correctness in programs can become orders of magnitude more difficult.


I agree in the abstract, but ...

- What's the largest program you've written?

- What's the largest program you've proved the correctness of?

So in reality, meaningful proofs are much much much harder than writing programs.


what's the largest program you've proved correctness of?

But that's the point. With security code, while you may not prove the correctness of it, there's a black hat that's trying to find a counterexample to your "proof".

Whereas for 99% of proofs that are published in the literature, no one is trying to prove that there are flaws in the proof. As someone who has reviewed CS papers, I would always try to really read at least one proof in the paper. Not skim it, but really scrutinize it. Probably 75% of the time I could find a problem with the proof -- usually one that was easily corrected, but it was still wrong. And it took substantial effort to do this (which is why I only did one per paper and just read the other proofs).

Some recommended reading: http://www1.cs.columbia.edu/~angelos/Misc/p271-de_millo.pdf http://research.microsoft.com/en-us/um/people/lamport/pubs/l...


I'd argue there is a certain notion of fuzzing with proofs. Say I have a proof claiming that a certain function is monotonic. Fuzzing, in this case, is throwing a bunch of numbers at the function and checking to see if it's actually monotonic with regards to your inputs.

Obviously not all math proofs are regarding functions and definable numbers, but there's a similar concept of fuzzing for each different proof type -- it just might not be easy to automate, or state in a programming language.
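A sketch of that idea in Python, using an illustrative claim that f(x) = x^3 + 2x is strictly increasing (the function and trial count are invented for the example):

```python
import random

def f(x: float) -> float:
    # Claimed monotone: f'(x) = 3x^2 + 2 > 0 everywhere.
    return x ** 3 + 2 * x

def fuzz_monotonic(f, trials: int = 1000, lo: float = -100.0, hi: float = 100.0):
    """Throw random ordered pairs at f; return a counterexample (a, b) or None."""
    for _ in range(trials):
        a, b = sorted(random.uniform(lo, hi) for _ in range(2))
        if a < b and not f(a) < f(b):
            return (a, b)  # evidence that the monotonicity claim is wrong
    return None

print(fuzz_monotonic(f))  # None: no counterexample found
print(fuzz_monotonic(lambda x: x * x))  # finds a pair straddling 0, since x^2 dips
```

As with fuzzing code, a run that finds nothing is only evidence, not a proof; a run that finds a pair is a definitive refutation.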


There are some domains, like Geometry, where trying enough random numbers is actually good enough for a certain kind of proof.


But isn't this ignoring a holistic view of security, and the fact that many flaws come down to human error, not only in code but in procedures and organisations? It seems to me that by saying "security is math" there is a risk of ignoring that part of the problem.


Or more generally, everything is mathematics.



