ef47d35620c1's comments | Hacker News

I've had very good reliability from my SSD drives as well. Some have been running almost continuously since 2009.


I ran a 60GB SSD as my Windows machine's system drive (with pagefile) for four years before it started showing problems, and that machine saw a few hours' use almost every day. It was >90-95% full for most of that time.


I'm learning to do this again. At first I had to make myself do it, but now I really enjoy it. I typically walk during lunch and in the evenings, just for the sake of walking and thinking. I find that it reduces stress and helps me to think more clearly.


tmux is included in OpenBSD base too. So if you use OpenBSD, it's there by default.


You could also remove the hard drive. It's very easy to do and would prevent any accidental write.


I very much agree. When that happens to me, it's usually because I'm thinking about the issue in the wrong way because of how I have read (or perceived) the code and how I think the code should be. Wrong notions, assumptions, etc. Getting away from the code allows me to focus carefully on the problem alone. Then things become clear.


The 'it depends' answers are good and I would not cite them as evidence of a problem. You employ people who understand what they are talking about and who also understand the importance of an accurate answer. They are probably very good engineers.

How does X close a TCP connection... it depends on the operating system in question. What cipher does my browser use when talking to X website... it depends on what ciphers are supported/available and how the client/server are configured. Which router do these packets go through... it depends. Is my password secure... it depends on the string you chose, the hash type used to store the password and who the attacker is and what their resources and time frames are.
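The cipher case is easy to see concretely. Here's a rough sketch using Python's `ssl` module (the cipher filter and the commented-out host name are my own illustrative choices, not anything from the comments above): the cipher actually used is whatever survives the intersection of the client's offer and the server's configuration.

```python
import ssl

# The negotiated cipher depends on both endpoints' configuration.
# Restricting what the client offers changes what can be agreed on.
ctx = ssl.create_default_context()
ctx.set_ciphers('ECDHE+AESGCM')  # only offer ECDHE key exchange with AES-GCM

# In a live connection (hypothetical host), the answer comes from cipher():
#   with socket.create_connection(('example.com', 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname='example.com') as tls:
#           print(tls.cipher())  # still depends on the server's config
offered = [c['name'] for c in ctx.get_ciphers()]
```

So even before a single packet is sent, "which cipher will my browser use" has no fixed answer; it depends on both ends.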

There is hardly anything absolute in technology/software. And, people who want an absolute answer are only indicating that they do not understand the fundamental complexity issues that we deal with as technologists.


That didn't seem to be his point. He didn't say at all that the person providing the answer was incompetent or imprecise or anything negative. The example he gave was different from yours: it was about the software the team is responsible for and about the fact that technical debt had built up. In dysfunctional organizations, dysfunctional software happens despite the bright people.


"In dysfunctional organizations, dysfunctional software happens despite the bright people."

A variation of Conway's law. https://en.wikipedia.org/wiki/Conways_Law


The potential corollary: "In functional organizations, functional software happens even when the people aren't that bright."

Exciting or sad, or both?


>"In functional organizations, functional software happens even when the people aren't that bright."

It could happen, but I've never seen it, nor have I even read about it in the literature. The sum conclusion of 20 years of software engineering literature is that two things matter: the quality of your programmers and the stability of your requirements. Of those two, the first matters more than the second. Get those two, and it doesn't matter what methodology you use, you'll have functional and elegant software. Miss those two, and it doesn't matter what methodology you use, your software will be crap.


> the quality of your programmers and the stability of your requirements. ... Get those two, and it doesn't matter what methodology you use, you'll have functional and elegant software.

I've always believed that when excellent programmers advocate whatever methodology they use (e.g., Beck et al.), they themselves are missing the fundamental fact that they could produce excellent software using three sea shells.

I'm really glad that those excellent and public programmers don't have a more perverse sense of humor.

... or, maybe they do ...


The 'it depends' answers are good and I would not cite them as evidence of a problem.

They might be the best answers you could get in that company, but it would be better still to get:

"Use this library feature"

"Ok; are there any weird exceptions?"

"No, this is it. We rolled all that company's accounts into this system when we merged, all old accounts in when we upgraded, and planned and designed this to be good enough for the high risk accounts too. There's only a handful of other accounts kept separately and there is no way to verify those from here, by design."


[deleted]


The post was deleted, but it implied that all software closes a TCP connection with a FIN. Some software sends an RST instead, with no FIN.
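This is even controllable per socket. A minimal sketch in Python (no real traffic is sent here): by default, `close()` on a connected socket starts the graceful FIN handshake, but enabling `SO_LINGER` with a zero timeout makes most stacks tear the connection down with an RST instead.

```python
import socket
import struct

# Abortive close: l_onoff=1, l_linger=0 tells the stack to drop the
# connection with an RST on close() instead of the graceful FIN exchange.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
             struct.pack('ii', 1, 0))  # l_onoff=1, l_linger=0

# Read the option back (struct linger is two ints on Linux).
onoff, linger = struct.unpack(
    'ii', s.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 8))
s.close()
```

So "how does X close a TCP connection" really does depend on the application and the OS, not just the protocol spec.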

The issue is that the answer is relative. There are no absolutes. So yes, it does depend and yes there is always an explanation as to why it depends. Just because something works this way on this system does not mean it works the same way on that one.

To believe that something is the same everywhere on every system for every user is foolish at best and reckless at worst. And people who believe that (typically non-technologists) also think they have the answer: let's rewrite it to be the same everywhere. This is when all the engineers dust off their resumes and bail.


Replace answering "it depends" with actually providing the process to get you the answer.


I don't know much about systemd, so I can't say either way.

So long as I can still get syslog style plain text logs, then I have no objections. Unix and text log parsing is unparalleled. If simple text log parsing and manipulation is removed (all binary with only xml or json), then I'd be very opposed to systemd.


The short story is systemd's logs are "binary", but they have tools to transform them into whatever plaintext format you like.


Also, I believe that only binaries can be FIPS certified, not source code, so there are times when one has to use an old, outdated OpenSSL binary in order to be compliant.


The OpenSSL FIPS certification (#1747) is for the source code, not for a binary. This is highly unusual indeed, but it is not the case that only binaries can be FIPS certified.

On the other hand, you can't change the source without losing the certification, so it doesn't actually matter.


So any change to openssl fips has to happen as compiler patches?


No, changing the source means you're not using FIPS-compliant source so you're breaking your terms.

This is why you might have to use old versions of OpenSSL for FIPS compliance - not all versions might be certified.


I think the GP is talking about a trusting trust attack on OpenSSL: Change the compiler to compile OpenSSL differently, rather than change the source itself.


I guess it raises the question (FIPS mode seems to fail the "talk to a cryptographer" rule): why aren't security folks more involved in making sure these standards are meaningful? Was this a NIST-driven process, or was it open to public comment?


That's part of the culture. It's nothing personal. Don't take it to heart if what they say offends you.


Would a possible solution be to check how many people are using the random generator...?

One person may have multiple processes reading from /dev/urandom.
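To see how rough such a count would be, here's a sketch (Linux-specific assumption: a readable `/proc` filesystem; the helper name is mine) that finds processes currently holding /dev/urandom open:

```python
import os

def urandom_readers():
    """Return PIDs of processes with /dev/urandom open (Linux /proc scan)."""
    pids = []
    for pid in filter(str.isdigit, os.listdir('/proc')):
        fd_dir = '/proc/%s/fd' % pid
        try:
            for fd in os.listdir(fd_dir):
                if os.readlink(os.path.join(fd_dir, fd)) == '/dev/urandom':
                    pids.append(int(pid))
                    break
        except OSError:
            # Process exited mid-scan, or we lack permission to read its fds.
            continue
    return pids
```

Note that one person easily shows up many times here (and short-lived readers not at all), which is exactly why "how many people" is hard to answer from this.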

