It's pretty accurate for most software engineering roles out there. You whiteboard code to solve theoretical problems with algorithms you learned in college. The real job typically involves trying to get code from various languages, frameworks and platforms to work together. There may be one or two times where you actually use some obscure algorithm but that's it.
I don't know what to call it, maybe the contrapositive, but if you do this for a living it's kind of surreal when you interview for a dev position and fail because you go too in-depth with underlying details.
I've been told, "you're a bit too paranoid about security, we'd recommend you memorize some of the answers to interview questions on StackOverflow if you're really interested in this."
The inverse is true in a lot of technology companies sadly. I once worked with a "network security expert" who got the job after 2 years as a front end web dev and zero real training in network security. He knew what SSL was so got promoted...
I was a Java web GUI developer, typical stuff, so the architect of the project asked me to implement better security by reversing SHA512.
Specifically SHA512, he said; don't use SHA1 or SHA256, that's not good enough. And he explained several times how the security solution would work: it would be based on reversing a hash function. I shrugged it off (people say weird things sometimes) and offered a real solution, but he kept pushing for his. Weird, but maybe he just meant what I meant and was using different words. So I implemented a real solution, and then days later he questioned my judgment and the solution because I didn't use his. That's when I said, "You can't reverse a hash function; in fact, that's the point of it," and pointed to the diagram he'd made: see, there, that's not mathematically possible. His title was Security Architect.
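For anyone unfamiliar with why this is absurd: a cryptographic hash is one-way by design, so the only generic "reversal" is guessing inputs until one hashes to the same digest. A minimal Python sketch (the secret value here is just a made-up example):

```python
import hashlib

# Hashing is cheap and deterministic...
secret = b"correct horse battery staple"   # made-up example input
digest = hashlib.sha512(secret).hexdigest()
assert len(digest) == 128  # SHA-512 produces 512 bits = 128 hex chars

# ...but there is no inverse function. The only generic "reversal"
# is brute force: hash candidate inputs until one matches.
def brute_force(target_hex, candidates):
    for guess in candidates:
        if hashlib.sha512(guess).hexdigest() == target_hex:
            return guess
    return None  # not found: the digest alone tells you nothing

assert brute_force(digest, [b"password", secret]) == secret
```

That brute-force loop is the entire attack surface: with a high-entropy input it is computationally infeasible, which is exactly why a hash is not an encryption scheme you can "reverse."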
Does Google still do this too? Last thing I knew Google would stick you wherever they decided, not necessarily where your strengths were and where you wanted to be.
I was under the impression (possibly wrongly) that you could relatively freely change teams at Google assuming you weren't entirely unsuitable for the job, and were reasonable about not changing team too frequently and actually getting stuff done.
From what I have heard, "not too frequently" means once every 18 months, so some people have complained about being stuck on a team they don't like for a year and a half.
I TAed a Computer Systems course at a different university, and it was a great exercise to get the students to really understand what was going on in their programs - and to force them to use the powerful tools (disassembler, debugger) available to them.
I imagine this sort of reverse engineering is common for malware analysts and people who work in the AV/security industry.
Personally I work at a software security shop, where we aim to prevent this sort of reverse engineering. So we also end up doing a lot of it ourselves to stay familiar with attacks, test our own protections, debug issues, etc.
If people want to learn more about this kind of stuff, tuts4you is great:
https://tuts4you.com/
Outside the security industry, where I guess this really would be your day job, any embedded programming job close to the metal is not far from this.
Granted, you don't have an adversary so you won't find any anti-debugging tricks dropped into the code, although at times you'd be forgiven for thinking they had.
> I'm not working for that company now , I moved to Barcelona.
> i live in Barcelona and have a great life
In both cases living in Barcelona is positioned as the opposite of working -for that company-, so I'm confused. Are you working? Is it a security-related position?
Stuff like this deeply fascinates me. However, I can't help but feel like a lot of it could be improved with the help of some automation tooling.
Is there anything out there like a "language" of sorts (or API, etc.) that can automate some of the debugging steps required? I imagine very "active" debugging and code modification, like: "When we are at this location, and the past few instructions executed were X, Y, and Z at addresses A, B, and C, pop this item off the stack and push this hard-coded value." Things like that would take forever to do manually, especially when called in a loop, but would be fairly trivial to formalize into a programming language.
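To make the idea concrete, a rule like that is just a trigger (address plus recent-instruction pattern) and an action (mutate state). A toy sketch in Python, purely illustrative and not any real debugger's API:

```python
from collections import deque

class ToyDebugger:
    """Toy model of a scriptable debugger: NOT a real debugger API."""

    def __init__(self, window=3):
        self.history = deque(maxlen=window)  # recent (addr, insn) pairs
        self.stack = []                      # pretend machine stack
        self.rules = []                      # (target_addr, expected_history, action)

    def add_rule(self, addr, expected, action):
        self.rules.append((addr, expected, action))

    def step(self, addr, insn):
        # Fire any rule whose target address and instruction history match,
        # then record the current instruction.
        for target, expected, action in self.rules:
            if addr == target and list(self.history)[-len(expected):] == expected:
                action(self)
        self.history.append((addr, insn))

dbg = ToyDebugger()
# "When we reach 0x103 and the past instructions were X, Y, Z at
#  0x100-0x102, pop the top of the stack and push a hard-coded value."
dbg.add_rule(
    0x103,
    [(0x100, "X"), (0x101, "Y"), (0x102, "Z")],
    lambda d: (d.stack.pop(), d.stack.append(0xDEADBEEF)),
)

dbg.stack.append(42)
for addr, insn in [(0x100, "X"), (0x101, "Y"), (0x102, "Z"), (0x103, "JMP")]:
    dbg.step(addr, insn)
assert dbg.stack == [0xDEADBEEF]  # rule fired and patched the stack
```

Real tools express the same shape: gdb's Python scripting and IDA's plugin APIs let you attach exactly this kind of conditional, state-mutating hook to addresses and events.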
A lot of debuggers and disassemblers do exactly that. They're getting smarter every day, but they still take some knowledge of assembly and lower-level computer science concepts.
IDA Pro [0] is a good example of an excellent debugger with plentiful features and productive disassembling tools.
gdb actually has amazingly good Python scripting built in if you're using a reasonably new version of it. Red Hat led the charge[1] with what they called 'Project Archer', which was merged into gdb mainline as the Python scriptability.
Hehe.. I worked for NuMega back then. I did tech support, and I still remember how often we got requests from people and companies wanting a way to prevent SoftICE from being used to crack their software. So many times I tried to explain that there just wasn't a good way to detect it that couldn't be circumvented by the debugger.
One thing to keep in mind is that there are only a handful of really good disassemblers, and anyone semi-serious about keeping you out can throw in code that will confuse and/or crash them.
Disassemblers aren't at all hard to write. Tripping up IDA might be a decent bang-for-the-buck countermeasure, given how straightforward it can be, but it's a speed bump.
It's very possible. Recently I had to manually patch code that was (in assembly) a conditional jump on conditions that at runtime were always true or always false, so the "conditional" part was a red herring.
The fact that the conditional was there confused IDA such that it miscalculated the stack usage for many of the functions in the binary and refused to designate them as procedures (which can be decompiled).
The telling part was that these conditionals had no purpose other than to confuse IDA, so you could see the intent was malicious.
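A condition like that, fixed at runtime but not obvious to static analysis, is usually called an opaque predicate. A tiny numeric illustration (my own example, not taken from the sample described above): for any integer x, (x*x) % 4 is always 0 or 1, so a branch on (x*x) % 4 == 2 can never be taken, yet a disassembler still has to model both paths.

```python
# Opaque predicate sketch: (x*x) % 4 is 0 for even x and 1 for odd x,
# so the condition below is always false at runtime. The dead branch
# exists only to waste an analyst's (or a tool's) time.
def dispatch(x):
    if (x * x) % 4 == 2:      # always false: unreachable "junk" branch
        return "junk code a disassembler still has to model"
    return x + 1              # the only branch that ever executes

# Exhaustively check the predicate over a range of integers:
assert all((x * x) % 4 != 2 for x in range(-1000, 1000))
assert dispatch(7) == 8
```

Malware authors layer tricks like this (often less mathematically clean, e.g. branching on values computed at load time) precisely to break stack-usage and reachability analysis in tools like IDA.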
That's just one example. You REALLY have to know what's going on with the hardware and the intentions of malware authors so you don't blindly accept what your tools are telling you. That's what makes the difference between someone who can do reverse engineering for a living and someone who is good at it.
EDIT: One more thing...debugging malware is a last resort; it's very easy to behave one way for a debugger while doing something completely different elsewhere. If you start with automated detection/debugging tools you spend most of your time working around debug detection.
If you'd like to know more my email address is in my profile.
That's very interesting. Thank you for correcting me. I'd like to know more, but everyone else probably would, too. Personally, I'm curious about the backstory behind that particular piece of software, and also about any other tricks you've noticed.
The backstory is pretty simple; I analyze malware and this was a sample. It's difficult to talk about publicly because if you reveal too much, it's a chance for the malware authors to recognize they've been made and change what they're doing.
What's interesting is that if you've seen enough samples, you can make educated guesses about the authors, their intentions, and level of competence. In this case, the authors were obviously aware that someone might try to reverse engineer the software so they threw that little red herring in. I have no idea why, and it was only in certain functions and not others. But you do know the authors had a clue about IDA and similar static analysis tools and were trying to make it more painful to analyze. It certainly wasted a couple of hours of my time.
Fortunately the obfuscations make software like that easier to detect, so it's a balancing act the author has to play.
If I ever stop analyzing malware there might be a very interesting blog series on all the boneheaded mistakes malware authors make when they obfuscate their code. I could teach a six-month course on what not to do with crypto just from all the approaches I've seen.
I played around in the cheat scene as a kid and what you are describing sounds like someone who didn't know what they were doing either a) copy-pasting from or b) using a toolkit provided by someone that did.
"If I take a letter, lock it in a safe, hide the safe somewhere in New York, and then tell you to read the letter, that’s not security. That’s obscurity. On the other hand, if I take a letter and lock it in a safe, and then give you the safe along with the design specifications of the safe and a hundred identical safes with their combinations so that you and the world’s best safecrackers can study the locking mechanism — and you still can’t open the safe and read the letter, that’s security." -- Applied Cryptography, Bruce Schneier
If you distribute a program with broken security, it needs testing and fixing. If this was a real program, as opposed to an exercise, it would have gotten a security advisory.
So, the only failed "test" here would be writing a reverse-engineering-resistant program and expecting that to provide "security", and that's less a test of morals and more a test of competence. But as an exercise, written to be intentionally buggy, this was quite good.
Now, a good question to ask before this exercise would have been to have the interviewee discuss the idea and implications of prompting for a password in a client-side application; look for them to explain public/private key cryptography and security versus obscurity.
> but trustworthy people do not walk around testing doorknobs to see if they are locked.
Yes, I agree. They do not go around testing doorknobs unsolicited and without permission. Which has absolutely nothing at all to do with this story, since he has both permission and was instructed to do so by his interviewers.
He was not asked to crack the NSA's database on metadata for phone calls. They are command line examples.
I downvoted you for unnecessarily swiping at the candidate saying: "This was a test of morals, and you flunked." You state this absolutely when in reality it is almost certainly a subjective minority opinion you hold.
On the Gaussian bell curve of morality, as I pointed out myself about myself, I lie at one end of the spectrum (two standard deviations above the mean, as I see it). Every population will have such a distribution. You are downvoting an outlier because you disagree with him.
I don't care what your morals are; it's just bad interviewing. But, if you really want to bring up your morals, why are you deploying underhanded interview tactics? You let applicants rest on the implied assumptions of a professional environment. Someone is extending you the professional courtesy of assuming that (if hired) you won't ask them to act immorally; you use that against them to walk them into a trap. How does that fit into your moral system?
I am downvoting you for unnecessarily swiping at the candidate by forcing your minority opinion on him. "This is a test of morals, and you failed" is a harsh judgment that only you and maybe a few other crusaders hold. Do not authoritatively state it and expect others to support you.
Don't make it look like you are being persecuted. You are the one lording a minority opinion over someone in an attempt to persecute them.
Can you tell me what company you work for where you may be in a position to hire people? I'd like to make sure I never waste my time by applying there.
They didn't ask him to crack Photoshop or AutoCAD or anything... they're freaking crackmes. It seems like quite a sensible way to approach such an interview since it takes significant time, can be done overnight on their own workstation and isn't a huge requirement like "write us a game entirely in assembly".
There is nothing immoral about cracking a crackme file, which is the very reason why the file was created.
I don't think you're wrong. The interview tactic is not useful for testing morals; your interview is testing the interviewee's aptitude for identifying underhanded questions (and/or their ability to sidestep pointed questioning on a skill they're weak at).
But if somebody says "we want to hire you, pick this lock", your first response has simply got to be "wait, we need to discuss this."
Is the lock attached to your competitor's back door, or did you just place a lock cylinder on the meeting table? Remember, you're hiring a locksmith.
You've provided the applicant an ethical and legally safe environment to apply their skillset and tasked them with proving their skillset. What do you expect to happen?
How many of your everyday business skills can be used immorally? If you preface every demonstration or application of your abilities with a moral clarification, my immediate thought would be "The lady doth protest too much, methinks"
When I find myself in the role of employer again, which could be soon, I'm going to use this to test potential employees. The people I hire will say "I know how to use a debugger to reverse engineer machine code, but I don't crack passwords" or some variation of that.
There is a good chance you would hire someone that doesn't know how to do it.
If you don't want them to do it, then don't ask. This, IMO, is the wrong way to test morals.
I don't see anything immoral about it. It was not a real application, it was not causing damage to anyone and it was totally related to the job position.
I have no idea why you think a security researcher shouldn't know how virtual locks work and be able to penetrate them, in my mind it seems a reasonable skill test, and so why you think this is immoral is completely beyond me.
Would you have the same objections if the test was a complicated SQL injection?
You're being downvoted because your comment is, to most of us, a complete non-sequitur. Everyone understands that "cracking" this program is a harmless game. Successfully doing so, however, demonstrates that the interviewee has a decent grasp of various systems concepts, and is able to apply them.
But I do think this is a serious issue and I'm astonished at the sheer number of you who cannot allow discussion of it and will not allow your beliefs to be questioned.
We _were_ discussing it, you got a few downvotes, so what? Disagreement is a form of discussion, had you wished to iterate on the explanation of your stance I would have gladly continued until we distilled the conversation down to the core bits we don't see eye to eye on. Maybe neither of us would be swayed, but at least we'd see where one another were coming from.
I said this before but deleted it because I thought it was a low blow, but in light of your edit I'll reiterate: the problem with this conversation was not your moral stance, it was your arrogance.