
It seems to you what?


Probably that letting the slow-GMO survive in a natural environment for hundreds of years does wonders for ironing out the really bad bugs.

I'm not anti-GMO, but I've been more wary ever since I realized it will inevitably be used to make better citizens (and not by making them better humans).


It depends. There are a number of dog breeds with known genetic issues.

GE is a double-edged sword, just like nuclear energy. With great power comes great responsibility.


Those are mostly innovations of the last century or so, bred "scientifically", regulated by dog-breed foundations, and chasing dog-show metrics.


Attribution of malware can be difficult, and the lack of details, like who was targeted and the missing plugins, doesn't leave enough information to guess who might have been interested in developing this.


I regularly look at ARM software and firmware. There are certain things that IDA does better, like FLIRT analysis, which I’ve missed since switching to Ghidra.

Ghidra also currently lacks support for certain ARM instruction decodings, so you can get odd-looking disassembly. That isn't to say you can't add it.

Ghidra has been very nice for reversing C++ code. After filling in most of an object's members, reversing other subroutines that use the defined type becomes simpler, IMO.


Thx for your opinion. What about automatic ARM-Thumb switching on jump instructions? I tried Radare2 before and it was so hard to work with ARM firmware. If Ghidra supports proper ARM-Thumb switching, it sounds like a great alternative to IDA for me.


I don’t normally see ARM with thumb code so I couldn’t tell you. I am fairly certain it is supported but to what degree I am uncertain.


Mind if I ask how to get started reverse engineering software?


In my experience: pick something you want to find out about a piece of software, then go do that. For example, I wanted to write an autosplitter (used for speedrunning) for The Witcher 3. So I first had to figure out a plan. I found out that The Witcher 3 is heavily based on facts and fact changes: if you progress a quest, that's an entry in the fact database. I went looking for the method which adds facts to the DB, and after a while I found it. Then I hooked it (redirected it to a custom function, which calls the original after my custom code has executed) and wrote the rest of the autosplitter.
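A minimal sketch of the hooking idea described above, in Python for brevity. All names here are hypothetical; real game hooking patches native code (e.g. a jmp trampoline) rather than reassigning a Python name, but the shape is the same: keep a pointer to the original, run custom code, then fall through.

```python
facts_db = []

def add_fact(fact):
    # Stands in for the game's "add fact" routine.
    facts_db.append(fact)

# Keep a reference to the original before redirecting callers.
original_add_fact = add_fact

def hooked_add_fact(fact):
    # Custom code runs first (e.g. notify the autosplitter) ...
    print(f"autosplitter saw fact: {fact}")
    # ... then fall through to the original behaviour.
    original_add_fact(fact)

# Redirect callers to the hook.
add_fact = hooked_add_fact

add_fact("q103_quest_started")
```

The key property is that the hook is transparent to callers: the database still gets updated exactly as before, with the custom code piggybacked on top.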


It's open-source, you shouldn't need to decompile it. :)


I don't understand how this differentiates itself from ARM TrustZone.

You can already create a system with a dedicated HSM and run your own trusted operating system using the features of the processor. With Intel SGX you are somewhat stuck using the HSM provided by Intel. With the BSD licensing, aren't we left in the same place as with ARM processors, except that producing a RISC-V processor is less expensive with regard to licensing?


Is TrustZone open source? It's in TFA, the differentiator is that this is open source. Enclaves per se have existed for a long time now.


Making something closed source does not make your product more secure, it only makes it harder to look at. Determined people will still try to understand how your software works in order to accomplish their goals.


Security through obscurity is a valid and effective tactic -- it's simply ineffective on its own.


To reinforce your point, see all pre-modern crypto techniques. It cannot be argued that they worked, and they were all certainly security through obscurity.


Aren't most examples things where it didn't work? The most famous case is the German "Enigma" device from WWII (hardware- and 'software'-based, but cracked and readable for years before the Germans knew, because they believed it was both obscure and effective), but it's wholly possible that most schemes were broken eventually. Keeping an obscure system secret is really hard, especially against a motivated attacker.


Enigma wasn't hard through obscurity. The Allies had the Enigma machine long before they were able to crack it. It was hard because, with the equipment of the day, it was pretty much unbreakable, in the same way that prime-number-based cryptography is today. It was only when A. Turing developed a completely novel kind of machine (https://en.wikipedia.org/wiki/Bombe) that decryption became possible, in the same way that quantum computers could break current cryptography easily. It's not obscurity; it's assuming that some (mathematical) task is hard.


Don't forget about the Poles. They too broke the encryption earlier, but then they were invaded, and no precision machinery was available to keep up when the number of rotors was increased. https://en.m.wikipedia.org/wiki/Cryptanalysis_of_the_Enigma Turing did it too, independently.


Didn't know about that! But it seems they were able to break the system only while the Germans were sending the plugboard settings in the header of each message. Once that was changed in early 1940, their decryption techniques no longer worked.

Btw, from the Wikipedia article: "lazy cipher clerks often chose starting positions such as "AAA", "BBB", or "CCC"". Weak passwords were an issue even back then.


I went to Bletchley Park a couple of years ago. It's a fascinating place. I remember hearing stories of code breakers who could infer that a piece of plaintext was all JJJJJJJJJJJ simply because, upon looking at the ciphertext, it contained no J (relying on the fact that no letter would ever encrypt to itself in Enigma, because of the reflector). Indeed the Poles don't get enough credit for their contributions. And yeah, virtually all encryption was similar to Enigma back then: the Allies too had a similar machine. I believe traitors sold secrets, or Enigmas were captured on U-boats and so on, so security through obscurity wasn't really a thing back then either.


From what I know, Turing didn't do it independently: the Poles sent their work to England about two months before being invaded. What Turing did was improve on their work so it could scale (the Germans had added more rotors, so the Polish decrypting machine was no longer helpful).



I would consider the Enigma to be a very good counterexample to security by obscurity. Even after capturing a few of the apparatuses, it took a lot of mathematicians and engineers a lot of time and effort to build something that could decipher messages before the key became obsolete.


Enigma's security didn't rely on obscurity. Having the machine didn't enable the Allies to decrypt the messages. It relied on the secrecy of the... secret keys and the monthly key books.

It's also quite interesting that the Polish cryptanalysts were able to reproduce the Enigma machine used by the German army without ever having seen one. They were able to deduce the number of rotors, the wiring, etc.

What in the end doomed the Enigma was that it was more a kitchen recipe than cryptography based on solid principles. It was a smart recipe for the time, but it had flaws (like the fact that a letter could never encrypt to itself). In some regards, most of our symmetric encryption algorithms today feel a bit that way (with a lot more external scrutiny from experts, however).

Even in WWII, I don't think security through obscurity was considered an absolute barrier. It's more in line with a "defense in depth" pattern: it gives your adversary a little more work, since they now have to figure out how your encryption works before breaking it, but it's not expected to hold for long.


The Enigma was sort of on the cusp of a modern crypto technique IMO, not to say I know that much about it. I was more referring to other techniques like wrapping a message around a dowel or the Code Talkers from WW2.


Yes, the Zimmermann Telegram is a perfect example of security through obscurity.


This is not really a useful response.

The trivial counterexample is that all modern crypto techniques rely on keeping a key, or part of a key, secret. That's security through obscurity, and you've just stated bluntly that obscurity never works under any circumstances, right?

What you want to do instead is talk about tradeoffs. Talk about how much information you need to keep secret in exchange for a given window of effectiveness, and state a preference for systems which provide longer windows of effectiveness while requiring less information (such as only a key, or part of a key, instead of a key and an algorithm) to be kept secret.

Also, take care with your argument about "pre-modern crypto techniques". Some of them remained effective for centuries after being invented, which is a far cry from your "cannot be argued that they worked", and not necessarily a favorable comparison with many modern techniques, which are lucky if they make it a couple decades before being broken.

(also, of course, all cryptographic systems eventually get broken, which is why every so often we switch to new algorithms, longer keys, etc., and you seem to be arguing that any system which eventually gets broken is a system which never worked, and that's also wrong)


We don't allow you to change the definition of "security through obscurity" just like that!

Using a public algorithm with secret key is BY DEFINITION _not_ security through obscurity. On the contrary.


In context it was fair because I was responding to a situation that was already playing with the definition, and once you allow that you have to allow taking it all the way.

Unfortunately, I started my reply to the wrong comment and didn't notice until after I'd posted it and it was too late to edit/delete.

tl;dr too many people have a knee-jerk "security through obscurity!" reflex action to things they don't like, and I have a reflex action of yelling at them about it, which sometimes misfires when I don't take care to reply at the right point in the thread.


Agreed. Kerckhoffs's principle isn't really up for debate.


Which reminds me of how this site was hacked:

https://news.ycombinator.com/item?id=639976


Hashing passwords is security through obscurity by that reasoning. That doesn't make it any less of a security function.

Just something to keep in mind.
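To make the distinction concrete: a salted password hash keeps the password secret even though the algorithm itself is public, which is the opposite of hiding the method. A minimal sketch using Python's standard library (the iteration count and salt size here are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Derive a digest from the password with a standard KDF; the salt
    # is public and only prevents precomputed-table attacks.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking where the digests differ.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
```

Everything about this scheme is published; the only secret is the password itself, and the security comes from the hash being computationally hard to invert.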


"Security by obscurity" tries to keep the way that your encryption method works obscure, it does not try to keep a specific key obscure.

For example, if your way to encrypt works like this:

1) Shift all letters along by 5.

2) Cut out every second word and put them behind the message in order.

3) Whenever there's an f, s or y in a word, double up that word and shift the second word's letters by 7.

Then if your enemy figures out how your method works, you have to come up with a completely different method.

The opposite of security by obscurity would instead be to come up with a method that depends entirely on a key. You can then publicize that method (or not), and if your enemy finds out your key, you just choose a new key and you're fine again.
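To make the contrast concrete, here is a Python sketch implementing only step 1 of the hypothetical recipe above, for brevity. The "obscure" version bakes the shift (5) into the algorithm itself, so a leak forces a redesign; the keyed version moves the secret into a replaceable key:

```python
def shift(text, n):
    # Rotate lowercase letters by n positions; leave other characters alone.
    return "".join(
        chr((ord(c) - ord("a") + n) % 26 + ord("a")) if c.islower() else c
        for c in text
    )

def encrypt_obscure(text):
    # Security by obscurity: the secret IS the method (shift of 5).
    return shift(text, 5)

def encrypt_keyed(text, key):
    # Kerckhoffs-style: the method is public, only the key is secret.
    return shift(text, key)

print(encrypt_obscure("attack"))  # prints "fyyfhp"
```

If the key 5 leaks, the keyed scheme recovers by picking a new key; the obscure scheme has nothing left to change but the algorithm itself. (Of course, a single-shift cipher is trivially breakable either way; the point is only where the secret lives.)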

