The Worst Argument in the World (unsw.edu.au)
23 points by benhoyt on June 28, 2009 | 15 comments


The whole denial of the possibility of empirical knowledge strand in modern philosophy has long rubbed me up the wrong way, but possibly only in a way that is clear to programmers.

The denial usually derives from a distinction between the mind and the world outside it: things-in-themselves from the outside world can never be perceived by the mind, because all perceptions are mediated by sensory organs; they are all filtered one way or another.

But this seems suspect when one considers a simple physical computer as an example of a simple mind. We model the "knowledge" of the machine as the state of its "memory", however we choose to represent that memory - flip-flop circuits or magnetized rust.

That "knowledge" changes as the machine's I/O manipulates the state through long chains of physical, mechanical operations, and looking in from the outside with our more sophisticated eyes we may see that the knowledge imparted by the "sensory I/O" may be more or less true. If it's less true (as a digitization, it'll almost always be an approximation), then the I/O or programming may have bugs; but if the I/O and programming are functioning well, is it true to say that the machine has not acquired true knowledge from its "sensory organs"? That it cannot acquire such knowledge?

Empirical (or a posteriori) knowledge is usually contrasted with a priori knowledge: stuff whose truth is independent of the outside world, usually as a function of the meaning of words (such as "All bachelors are unmarried" - these are analytic truths). Things that are supposed to be true independent of the outside world but not embedded in the meaning of the words are supposed to be "synthetic a priori" truths. But it seems to me that a priori truths come from the brain examining itself; the only way such "knowledge" can be obtained, i.e. for a state change to occur, is by examining the physical process of reasoning itself, whether directly, or indirectly as a result of the "programming", i.e. the construction of the machine/brain's mechanism for reasoning.

These "a priori truths" are mediated by the "I/O of self-reflection", and are not actually a priori at all, in practice. The knowledge of the truths, i.e. the experiential sense of "dawning on oneself", i.e. what it feels like to experience a state change in one's knowledge representation, came about because of a physical process which may or may not have bugs; i.e. it is mediated.

So, a counterpart to "we have eyes, therefore we cannot see" - a lovely caricature - is "we have brains, therefore we cannot think". It seems to me no true Idealist can deny that he cannot have ideas.


Well, since it's late-night philosophy hour on HN...

"The whole denial of the possibility of empirical knowledge strand in modern philosophy has long rubbed me up the wrong way, but possibly only in a way that is clear to programmers."

I wouldn't personally go so far as to deny the possibility of empirical knowledge. But I would at least say that putting empiricism on a solid logical foundation is damned difficult, and maybe impossible.

"The denial usually derives from a distinction between the mind and the world outside it; things-in-themselves from the outside world can never be perceived by the mind, because all perceptions are mediated by sensory organs, they are all filtered one way or another."

Here you've hit a particular nail quite squarely on the head. In the English-speaking pragmatic tradition, there's been quite a lot of work devoted to getting away from that distinction and a few others. Mostly, these distinctions have been inherited all the way from Plato (with a few exceptions, like the analytic/synthetic divide that Quine so famously argued against), and the moment you accept them you also take on the nastiest briar patch in all of western philosophy.

So the modern (postmodern? Rorty had issues with that label) pragmatist simply says: you know what? If your theory of nuclear physics lets you build a working power plant, don't bother losing any sleep over whether it matches up with the way the world "really" is, because that's not a useful question to ask.

And there's quite a strong temptation to buy into that point of view. You don't have to muck around with the logical foundations of empiricism and all the clever traps Hume left behind him. You don't have to trudge through the metaphysics of propositions in the hopes of establishing truth as correspondence. And with far less work than you'd put in solving those sorts of tangles, you can even get a convenient system for judging competing theories and choosing between them. Curiously, it ends up looking a lot like Popper's attempt at a falsificationist basis for empiricism.

But it also comes under fire from practically all sides. The foundationalists don't like it, because it says everything they've been doing since Descartes was pointless. The relativists don't like it because it still perpetuates the notion that some theories are better than others. The metaphysicians don't like it because it (literally) throws them under the bus. And the average person probably doesn't care much for it because it doesn't match up with the "common-sense" view of the world most people adhere to in western societies, especially since most of what westerners consider "common sense" goes straight back into the tradition that starts with Plato.

Of course, there's nothing in this which says you can't still have a notion of "reality". It's just that asking whether something corresponds to "reality" doesn't seem so important anymore. Thinking of atoms as miniature solar systems, with the nucleus in the middle and the electrons grouped in orbits around it, almost certainly doesn't correspond to how they "really" work, for example. But thinking of atoms in that way does let you get a lot of useful chemistry done (it gets you the layout of the periodic table, and the reactive properties of the elements, and...). It won't help you build a nuclear reactor -- you need quantum mechanics for that -- but if you're not building a nuclear reactor, then why does it matter?


I actually think that you're confusing a little bit of modern philosophy with a lot of "postmodernism." You have to be very careful there. Stove, Russell, and Wittgenstein are all "modern" philosophers, and all belong to the school of analytic philosophy - the school most philosophy departments around the world are now centered on (Russell and Wittgenstein were among its founders).

Postmodernism, on the other hand, has a comparatively small following. I'd make an exception for some of Stove's most-criticized colleagues, including Feyerabend. There's no denying that he and his followers were influential, but I would not argue that they hold the "dominant" position in modern philosophy. Their view is complex, and they don't deny empirical knowledge outright; instead they aim to criticize certain points of empiricism which scientists generally regard as "solid." However, as you just read, Stove was one of many who sharply criticized Feyerabend for "abusing" logical expressions.

I don't want this to turn into a really long argument, but I think you made an interesting point. Those who believe that a posteriori knowledge is impossible stand in opposition to many of the philosopher/mathematicians of the early-to-mid 20th century. Some of the most notable are Russell, Whitehead, and Wittgenstein (and I also want to briefly mention Gödel, who was not very active in philosophy, but whose mathematics helped the field immeasurably).

As opposed to Feyerabend and especially the skeptics, I would say that they have it all wrong. It is not a posteriori knowledge which is impossible; it is a priori knowledge which is impossible (or tautological, to be more accurate). Many of the early modern philosophers (especially the empiricists, like my heroes John Locke and David Hume) supposed that much of our knowledge comes from the outside - that is, from our experience. As we now know, they were largely correct. We gather a huge amount of our personality and worldview from our experiences, with very little "innate" knowledge.

One of the main hold-outs from that time was mathematics. Many believed that mathematics was something a priori solid. If you read Descartes's Meditations or Hume's Enquiry, you will see a lot of mention of Euclid, specifically his geometry. Time and time again the philosophers used the example of basic addition or the laws of geometry as arguments for a priori knowledge. That is, even if I don't know that that tree exists, I can at least know that 2+2=4. This would seem to be an argument for the skeptics and idealists (like Berkeley), but instead it is an indictment, for even mathematics is not safe from a posteriori reasoning. Why does 2+2=4? Because we have defined it to be so. We have come up with an algebra, the counting numbers, and defined addition on it. Does this hold some innate truth, an a priori truth about the Universe? I would argue that it doesn't. This a priori knowledge is tautological -- math comes up with the "right" answer because we defined what the right answer in our consistent system is.

Except we really didn't. Gödel has helped here by proving that no moderately complex system can be both complete and consistent. This is one nail in the coffin of a priori math, but it continues. Eventually we reach some of our most basic axioms -- Peano arithmetic. It would seem that these are truly untouchable. a != !a. Who can disagree with this? Well, if you look closely, you'll see an assumption here -- or, more importantly, a definition. We define !a. We define these expressions. These are a priori, and many believe that you can build a priori systems out of them. The problem is -- you can't. Russell and Whitehead saw this after Gödel's insight, but it's still a contentious issue.
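
The "true by definition" point can even be made concrete in code. Here is a minimal sketch (my own illustration, using the standard textbook unary encoding of the naturals, nothing from this thread): once zero, successor, and addition are defined, 2+2=4 falls out mechanically.

    class Nat:
        """A natural number: either zero or the successor of another Nat."""
        def __init__(self, pred=None):
            self.pred = pred  # None encodes zero; otherwise this is succ(pred)

    ZERO = Nat()

    def succ(n):
        return Nat(n)

    def add(a, b):
        # Addition by recursion on the second argument, Peano-style:
        #   a + 0       = a
        #   a + succ(b) = succ(a + b)
        return a if b.pred is None else succ(add(a, b.pred))

    def to_int(n):
        # Display helper: count the successors.
        count = 0
        while n.pred is not None:
            count, n = count + 1, n.pred
        return count

    two = succ(succ(ZERO))
    assert to_int(add(two, two)) == 4  # "2+2=4" by definition alone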

Well, I tried to keep that as brief as possible, but as you can see, philosophy tends to drone on and on. This isn't really a detailed analysis, but think of it as a footnote of my views on the issue.


Gödel did not show that "no moderately complex system can be both complete and consistent". What Gödel showed was that a recursively enumerable set of axioms rich enough to express the arithmetic of the natural numbers cannot be both complete and consistent.

The second-order Peano axioms for the natural numbers have only one model up to isomorphism. The second-order axioms are not computable, that is, not recursively enumerable. The first-order Peano axioms are recursively enumerable but have infinitely many non-isomorphic models.

What Russell and Whitehead tried to do was to remove humans from mathematical knowledge by finding a computable system - that is, a mechanical process for settling every mathematical question. Gödel showed that this is not possible. This is the reason why Penrose and some others think that AI will never reach the level of human intelligence.

Not sure if this impacts your points, but it definitely is not the case that no moderately complex system can be both complete and consistent. In fact a simple example demonstrates the incorrectness of your statement: just take as your axiomatic system the collection of all true statements in whatever system you are working with. That's a complete axiomatic system. It's not helpful, because there is no easy-to-use (think computable) criterion for finding out which statements are axioms and which ones aren't.


I am appropriately corrected. This actually doesn't change my view at all, because when I said "moderately complex," I assumed that the natural numbers were included in that. If you go back to read Descartes and Kant you'll see much of the same treatment - the addition operation, as defined by our algebra on the natural numbers, is used many times as an example of a priori knowledge.

You are completely correct though - it was very late when I first commented on this, and I was tired and simply wrong. I'll try to be much more specific in my treatment of mathematics in the future, although I am not a mathematician. I just want to note that I never claimed that Gödel proved that Peano arithmetic was incomplete or inconsistent (although, if I remember correctly, PA cannot prove its own consistency), but simply that the nature of Peano arithmetic does not imbue any a priori knowledge of the universe or our existence. This is supported by Gödel in the broad sense that we cannot generate a "universal theory" of mathematics. However, my main point is that mathematics is not truth, it is only a model of our definitions and observations -- a tool, if you will -- and an incomplete model at that.


I didn't think that your view would be changed. Quite honestly, I'm not sure I understand the philosophy behind this. But the fact that there is only one model of the second-order axioms of arithmetic (Peano's axioms with induction included) is a bit surprising to me.

We can't come up with a computable system for finding all mathematical truth but there appears to be a hardwired number system in the universe. It's not computable but it is unique. The natural numbers lead naturally (no pun intended) to the integers in a unique way. The integers uniquely lead to the rationals and the completion of the rationals is a unique object called the real numbers. The unique algebraic closure of the reals is the complex numbers. There is uniqueness at each step. This coupled with the utility of using mathematics to describe natural processes is...strange to me and some others.
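
Schematically (my own summary of that chain, not standard notation), with the result at each arrow unique up to the appropriate isomorphism:

    N --(adjoin additive inverses)--> Z --(form fractions)--> Q
      --(metric completion)--> R --(algebraic closure)--> C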

I don't know what this has to do with your points because I didn't understand them. Not because you didn't write clearly but because I don't know enough philosophy. I'm a mathematician and know very little about philosophy.

Thanks for your input.


Perhaps I am too influenced in my opinions of philosophy by my girlfriend, an ardent adherent to Kantian (transcendental) idealism - and she's German, to boot. We've gotten into heated standoffs over it, so we've mutually agreed not to bring it up.

I agree that a priori knowledge is tautological and subject to refutation via Gödel; and as a self-described software engineer (as opposed to a computer scientist) I suspect I have an inherent bias towards valuing empiricism over idealism.

I don't subscribe too strongly to the "blank slate" idea, at least not insofar as it relates to the "nature vs. nurture" debate. I believe a lot more is embedded in the nature, in the inherited evolutionary makeup, than most people would like to admit. I think much "knowledge" (such as how to acquire language) is "baked in" at the physical level, in the genome and proteome; but of course, this knowledge is not a priori, it comes from the "experience" that the evolutionary line of mechanical interactions has shaped.


Yes, I was unclear. When I said knowledge, I was using Hume's treatment of knowledge, which is predominantly intellect-based. That is, instinct, innate brain capacities, etc., are not part of "knowledge." The current nature vs. nurture debate is important, but it's not what I was talking about.


"He awarded the prize to himself"

So he also earns the award for the worst competition ever.


If induction doesn't actually work, we'll never know that for certain. What is the alternative to induction? Stop trying? Epistemic hand-wringing over the fact that induction might fail us isn't useful.


Well, the "epistemic hand wringing" has a very serious point, which is that it spells big trouble for philosophy of science, which is (among other things) concerned with the "problem of demarcation". Put simply: how do you tell what is and isn't "science"?

Hume's formulation of the problem of induction actually pointed to two things: one, the "logical" problem of induction, was simply the standard critique of inductive generalization as an unsupportable method of inference. The other, the "psychological" problem of induction, claimed that inductive generalization was nonetheless how human beings actually think, and so we're screwed. But in the late nineteenth century, and then again in the mid-twentieth century, you get two thinkers who challenge this.

Charles Sanders Peirce took a view of science and of human thought which was not based on induction: in Peirce's view, the "average" person simply believes something until it causes some sort of conflict (at which point, Peirce claimed, other methods of justifying belief would be developed in response, leading to a chain which eventually ends up at the scientific method). Peirce also didn't view science as being able to give ultimately true answers to questions (thus sidestepping the need to justify inductive generalization, even if it does end up as part of scientific method); rather, science can get better and better approximations to the truth over time (as more observational data becomes available and new theories are proposed to explain the data), but will never actually arrive at "the truth" (and we wouldn't be able to tell even if it did). In other words, Peirce's view of human knowledge and of science is based around fallibility.

Karl Popper, immersed in the world of German-speaking philosophy, came to very similar conclusions much later on, and proposed a solution to the problem of induction in the following form. First, he accepted in its entirety the logical problem of induction, but declared that it need not cause problems for science, because science need not be inductive in nature. Second, he proposed that the psychological problem of induction was a fiction: he asserted that the way people actually reason is far closer to fallibilism (just like Peirce), and framed it in common-sense terms as a process of trial and error.

Popper built a theory of demarcation around flipping the problem of induction on its head: it is true, he happily conceded, that no number of observed instances is sufficient to establish a generalization to all instances (including those as-yet-unobserved, or unobservable). But this turns out not to be such a big deal, because all it takes is one observed counterexample to demonstrate that a theory is false. Thus we can still proceed scientifically, but instead of speaking of theories which are "verified" by observation, we speak of theories which survive attempts at falsification.

Popper came to the same sort of conclusion as Peirce regarding the "truth" of scientific theories: he felt that there was no useful distinction between, say, a "hypothesis" or "conjecture", and a "theory", because none of them can be said to be true -- the best that can be said is that they have not yet been proven false. And so he developed a system in which "science" consists of those theories which can be subjected to falsification: a theory is scientific only if there is some test which, if it gives a negative result, will be taken as showing that the theory is false.

He talked occasionally of this system as applying a form of Darwinian selection to theories: there is never a final "best" or "true" theory, but there is a selection process at work which eliminates false theories through observation of counterexamples. The theories which stay with us and form the basis of everyday working science, then, are not those which are "true" but are merely those which, so far, have survived that selection process. And in judging between competing theories, Popper preferred the theory which was boldest in terms of possible falsification: theories which make assertions that are easy to test for falsity, he claimed, tend also to be those which -- if they survive such tests -- provide the broadest and most useful basis for further scientific work.

Of course, both Peirce and Popper are terribly unfashionable in philosophy of science these days. Peirce is reviled for having the gall to claim that science advances toward truth over time even if it never arrives at truth (a position which every good postmodern Kuhnian disciple will dogmatically reject). And Popper is often viewed as a sort of semantic charlatan whose attempt to shift from verification to falsification was merely a critique (albeit a devastating one) of communism, Freudian psychology and logical positivism.


I've never been able to grasp how falsificationism is incompatible with or different from induction.


They're sort of inverses of each other; a better way to put it is "verificationism" vs. "falsificationism".

The key difference is that a verification model seeks to establish that a theory is true, while a falsification model seeks to establish that it is not. Verification models cannot achieve their goal. Falsification models can.

This means throwing out the idea that you will ever have a theory "proven" to be "true", but thanks to the problem of induction you weren't (in the general scientific-method sense) ever going to get that anyway. Instead, you have theories which have been proven false (since falsification gives you counterexamples to universally-quantified conjectures, which allow the valid deductive conclusion of falsity of those conjectures), and theories which have not yet been proven false.

Importantly, you never say that the latter group of theories are "true", "likely to be true", etc.; you only and always say either that they've not yet been shown false or, more commonly, that they have thus far survived attempts at falsification.
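
A toy sketch of that asymmetry (my own illustration in Python, with made-up example data): confirming instances never upgrade a universal conjecture to "true", but a single counterexample deductively refutes it.

    def test(conjecture, observations):
        # A universal conjecture is never "verified" here; it is either
        # refuted by a counterexample or merely "not yet falsified".
        for obs in observations:
            if not conjecture(obs):
                return "falsified by " + repr(obs)  # a valid deduction
        return "not yet falsified"  # the strongest claim available

    all_swans_white = lambda swan: "white" in swan

    print(test(all_swans_white, ["white swan"] * 1000))
    # -> not yet falsified (a thousand confirmations prove nothing)
    print(test(all_swans_white, ["white swan", "black swan"]))
    # -> falsified by 'black swan'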

To a lot of people it does seem like meaningless semantics, but for people interested in the demarcation problem (which is anything but unimportant these days) it's quite significant because it offers a viable framework for a solution.


Thanks for the clarification. I guess the problem arose because I never thought of induction as a method for finding the "truth" per se, but rather as a method of finding consistent correlations (with direct cause and effect being a special case of correlation where the correlation coefficient is 1).


The article author's argument for why "the worst argument in the world" is invalid comes from a book by Alan Sokal, a physicist with a real chip on his shoulder about postmodern philosophy.

Sokal is perhaps best known for the "Sokal Hoax": http://en.wikipedia.org/wiki/Sokal_hoax

I strongly recommend anyone interested in this article and in learning about the more recent incarnation of the analytic/continental feud in philosophy read the articles Sokal has collected on his hoax: http://www.physics.nyu.edu/faculty/sokal/index.html

Outside these articles, Sokal and his sympathizers rarely acknowledge that there could even be any reasonable response to their allegations of buffoonery and charlatanism. Unfortunately for Sokal and his sympathizers, this pose leads to the conclusion that they are either inadvertently or deliberately ignorant of much of philosophy.



