william-newman's comments | Hacker News

"Trimming facts off at the first few levels of details makes everything feel, well, oversimplified."

But it's not just oversimplification of collective action, the plot was overblown even at the level of purely individual events in individual characters' lives. Arguably the greatest theoretical/experimental crossover physicist of the century takes the occasion of accepting his Nobel Prize to escape with his Jewish wife from Il Duce, and a few years later stands musing over a nuclear chain reaction in a secret project in Chicago? Puhleeze.


a few years later stands musing over a nuclear chain reaction in a secret project in Chicago? Puhleeze.

Not only that, but the reactor is in the basement of the University's stadium. Whoever wrote that one must've been reading Spider Man!


Good advice would be "so be particularly careful to check whether their arguments are logically sound and properly documented;" the original advice to skip past the quality of arguments and data to "so take their conclusions with a healthy grain of salt" is bad advice. The ad hominem fallacy isn't just unsound in principle, it tends to be pretty useless in practice. It can look reasonable at the time if there's enough groupthink, but would anyone like to nominate some cases where with two or more generations of hindsight we can agree that ad hominem arguments were a better guide to truth than simply addressing the technical arguments? And it can be wrong no matter how impressively much circumstantial evidence suggests that there could easily be a political motive: see, e.g., http://en.wikipedia.org/wiki/Robert_Conquest .


I wasn't saying their conclusions are automatically wrong; I was merely pointing out that this one requires a little more scrutiny given the obvious political leanings.

It's like reading a science paper that questions something with a ton of evidence behind it. You just have to ask "really?" of all its claims before accepting or rejecting them.


When I was in high school, I travelled by city bus while holding two different ultra-low-end programming jobs. I think you are understating the advantages of having a car.

If I expected to be earning and living on less than perhaps $12 an hour for an extended period of time, I would probably choose to go without a car, likely by moving someplace like Dallas TX (where I live now, and where the local wages vs. cost of living seem reasonably favorable) and renting an apartment very close to a bus line on a major suburban business artery. Judging from my high school experience, from many observations of foreigners living off their science grad school stipends, and from everything I've noticed in my years in Dallas, that'd work reasonably well. But doing without a car would constrain my life in important ways (including job options, education options, and general time efficiency) and I'd be very motivated to find economical ways to reduce the problem, e.g. sharing a car, or getting (and probably learning to maintain) an economical motorcycle.


The math in computer science may not be much like the usual idea of science, but neither is it what people ordinarily mean when they say "pure mathematics." It's much more nearly "applied mathematics," like statistics or signal processing or control theory: more likely to be interested in, e.g., the development of wavelets than in a proof of the Poincare conjecture.
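(A toy example of that applied-math flavor, mine rather than anything implied above: one level of the discrete Haar wavelet transform is nothing more than pairwise averages and differences, yet it reconstructs the signal exactly.)

    # One level of the discrete Haar wavelet transform: pairwise averages
    # (a coarse approximation) plus pairwise differences (the detail needed
    # to reconstruct the original signal exactly).

    def haar_step(signal):
        """signal must have even length; returns (approximation, detail)."""
        approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        return approx, detail

    def haar_unstep(approx, detail):
        """Exact inverse of haar_step."""
        signal = []
        for a, d in zip(approx, detail):
            signal.extend([a + d, a - d])
        return signal

    approx, detail = haar_step([4, 6, 10, 12, 8, 8, 0, 2])
    print(approx, detail)               # [5.0, 11.0, 8.0, 1.0] [-1.0, -1.0, 0.0, -1.0]
    print(haar_unstep(approx, detail))  # recovers the original signal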

Maybe it would've been more logical if the applied-math end of the CS field had ended up with an applied-math-y name comparable to "statistics," e.g. "algorithmics" or "algorithmic analysis," and much of the rest of the CS field had ended up as "software engineering" or "computer engineering" or "computational engineering" or "information system engineering," by analogy with "electrical engineering." But the naming of technical fields is not necessarily systematically logical, sometimes because of old idiosyncratic reasons to avoid ambiguity with other pursuits: "astronomy" vs. "astrology," anyone?


Software engineering may be (currently) part of CS, but theoretical CS is more than just applied statistics. The halting problem, one of the most famous undecidable problems, is pure math and is CS. Lambda calculus is CS and is pure math.
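(As a concrete illustration of that flavor of pure math inside CS, here is a sketch of the classic diagonal argument, using Python only as notation; `halts` is a hypothetical total decider assumed for the sake of contradiction, not a real function.)

    # Sketch of the halting-problem diagonal argument, with Python as notation.
    # `halts` is a hypothetical oracle assumed (for contradiction) to decide
    # whether program(arg) eventually terminates.

    def halts(program, arg):
        """Hypothetical total decider: True iff program(arg) halts."""
        raise NotImplementedError("no such total decider can exist")

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts for program run on itself.
        if halts(program, program):
            while True:
                pass      # loop forever
        else:
            return        # halt immediately

    # diagonal(diagonal) halts exactly when halts(diagonal, diagonal) says it
    # does not, so no total implementation of halts can exist.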


I have no particular sympathy with any anti-vaccine activism that I know of. But I wonder how, other than by not being an important faction in the appropriate big political tent, "anti-vaccine denial" ended up on this article's excrement list along with Holocaust deniers, while "nuclear power denial" and "genetic engineering denial" didn't.

My impression is that political opposition to nuclear power plants, to nuclear waste facilities, and to GM crops has had at least as much economic impact as political opposition to vaccines. Thus, it seems to me that they can't have been left off this list for being unimportant.

Perhaps the columnist thinks that the anti-nuke and anti-GM-crops political movements should be spared because they have achieved their political impact primarily by honestly making valid technical points? Granting for the sake of argument that that is a tenable position, then why ignore them? Wouldn't the anti-nuke and anti-GM movements make useful examples to clarify his position by comparing and contrasting? Wouldn't describing what is healthy and good about the thinking of the anti-GM and anti-nuke coalitions help us understand better what is so characteristically diseased and vile about anti-vaccine folk to justify grouping them with Holocaust deniers?

(I extended this comment, writing on an analogy to criticism of left-wingers as "politically correct," at http://naturalspiritofgoodcompany.blogspot.com/2010/05/on-ha... .)


It depends on what point you think you're making. (And perhaps you should make that point explicitly, so that I don't need to guess it before criticizing it, but if I'm guessing wrong, I apologize in advance.) If you are arguing that an article from 1996 is not timely, your argument is "relevant" but not necessarily correct. Note that the thread http://news.ycombinator.com/item?id=1062031 was on the front page when I first noticed this article. It contained a comment linking to the 1996 article. I don't know why the 1996 article became a standalone thread instead of just a comment on the 1062031 thread, but it might be that someone who read the 1062031 comment found the 1996 article surprisingly worthwhile.

IMO, the 1996 article (and even more the 1991ish "Fable of the Keys" academic journal article which preceded it) remains timely because both the QWERTY-is-superior and the QWERTY-demonstrates-market-failure stories are remarkably hardy perennials, presented as established fact without acknowledging the Liebowitz/Margolis counterargument. (E.g., _Knowledge and the Wealth of Nations_ by David Warsh (2006) carries approving blurbs from _The Financial Times_ and _The Economist_ on its cover, and should be easy to find in a library, maybe even a chain bookstore. "Paul David" and QWERTY appear in the index many times, but Liebowitz and Margolis do not.)


My point is simply that the publication date provides some important context for the contents of the essay.


For a more academic-economics-oriented article on the same theme by the same authors, see http://www.utdallas.edu/~liebowit/keys1.html , from _Journal of Law and Economics_.

Incidentally, I would be very interested in a pointer to a comparably serious/academic rebuttal of Fable of the Keys. It has been a very long time since Fable of the Keys was published, and a long time since I last tuned into the controversy; at that point my understanding was that a rebuttal had long been promised but never produced.


You write "first, they are anecdotal." Fictional, in fact: "once upon a time..."

And you write "second, why are the programmers not taking control of management positions if they know so much better?" That question is outside the scope of the fictional dataset of the original post, but see "Jack and the Beancounter" in http://www.skotos.net/articles/BTH_27.shtml for a related fictional study.

And you write "but realistically, I think Alan created the best product for the company." Perhaps I am just being trolled. But at least this was a good excuse to refer to Jack and the Beancounter!


There is another level to this anecdote, and that is how HN readers respond to it. It seems that some of them identify with Charles, and some with Alan. Some identify with a character because they themselves are, or act like, Alan or Charles; others choose the opposite because they had a bad experience with someone like one of them.

In other words, in this case the reader identified with Alan because that best describes how that reader works and acts, and they are just instinctively defending their position, even though, clearly, the parable portrays Charles as the better of the two. (Of course, that could be my bias, as I identify with Charles).


I agree that his point is valid for computers built on the ordinary substrate of classical switches (whether relays, tubes, or transistors). There are two very influential ways of representing computation, the lambda calculus and the Turing machine; when starting from switches, people (roughly) build something that looks like a Turing machine (with procedural programs) and then, if they want the lambda calculus (with more functional programs), they use Turing completeness to emulate it.
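(To make the emulation point concrete, a sketch of my own rather than anything from the parent: a tiny normal-order beta-reducer written in ordinary procedural Python, i.e. the lambda calculus emulated on a conventional, Turing-machine-flavored substrate. Substitution here is capture-naive to keep it short.)

    # Lambda calculus emulated in ordinary procedural code.
    # Terms are tuples: ('var', name) | ('lam', name, body) | ('app', fn, arg)

    def substitute(term, name, value):
        """Capture-naive substitution of value for free occurrences of name."""
        kind = term[0]
        if kind == 'var':
            return value if term[1] == name else term
        if kind == 'lam':
            if term[1] == name:        # name is shadowed below this binder
                return term
            return ('lam', term[1], substitute(term[2], name, value))
        return ('app', substitute(term[1], name, value),
                       substitute(term[2], name, value))

    def normalize(term):
        """Repeatedly reduce the leftmost outermost redex (normal order)."""
        while term[0] == 'app':
            fn = normalize(term[1])
            if fn[0] == 'lam':
                term = substitute(fn[2], fn[1], term[2])
            else:
                return ('app', fn, normalize(term[2]))
        if term[0] == 'lam':
            return ('lam', term[1], normalize(term[2]))
        return term

    # Church-style "true" (\x. \y. x) applied to two arguments picks the first.
    true_ = ('lam', 'x', ('lam', 'y', ('var', 'x')))
    print(normalize(('app', ('app', true_, ('var', 'a')), ('var', 'b'))))
    # -> ('var', 'a')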

That said, it's not 100% clear to me that this must be true for all physical models of computation. (And the existence of Shor's algorithm suffices to keep practical people at least a little interested in computational mechanisms which are fundamentally different from classical switches.) Does anyone know if there are any reasonably plausible computational mechanisms for which it'd be natural to start by implementing something that looked more like lambda calculus, so that if you needed a Turing machine you'd emulate it?


I don't think amateur-vs.-scientist-vs.-genius is a very good set of names for the distinctions he is trying to make, especially if you're going to reach back to Herschel's time. To pick some influential folk who fit into that period, consider Babbage, Darwin, Mendel, Pasteur, Boltzmann, Cantor, Semmelweis, the Wright Brothers, and Wegener. Consider also their rivals and critics. (Since I included a number of controversial folk, various of their rivals and critics are still remembered.) We can come up with generalizations about what the influential folk did right, but I don't think it works to say that, e.g., they were scientists while their critics and rivals were amateurs.

Or consider all the advice in an essay at the level of Hamming's "You and your research." It doesn't seem to me to be useful to try to boil down the multiple properties discussed there into boolean predicates "is this person a scientist" or "is this person an amateur," or even to try to choose particular points in those multidimensional property spaces as typifying "scientist" or "amateur" or "genius."

I wholeheartedly approve of people writing about some of the principles in this essay, notably the ones often summarized as "genius is 99% perspiration" and "a month in the lab can often save a day in the library." Those are very important principles, and very often people don't appreciate them enough. But trying to define the amateur/scientist/genius terminology on top of those principles doesn't seem very sensible.

