
The whole denial of the possibility of empirical knowledge strand in modern philosophy has long rubbed me up the wrong way, but possibly only in a way that is clear to programmers.

The denial usually derives from a distinction between the mind and the world outside it; things-in-themselves from the outside world can never be perceived by the mind, because all perceptions are mediated by sensory organs, they are all filtered one way or another.

But this seems suspect when one considers a simple physical computer as an example of a simple mind. We model the "knowledge" of the machine as the state of its "memory", however we choose to represent that memory - flip-flop circuits or magnetized rust.

That "knowledge" changes as the machine's I/O manipulates the state through long chains of physical, mechanical operations. Looking in from the outside with our more sophisticated eyes, we may see that the knowledge imparted by the "sensory I/O" is more or less true. If it's less true (as a digitization, it'll almost always be an approximation), then the I/O or the programming may have bugs; but if the I/O and the programming are functioning well, is it right to say that the machine has not acquired true knowledge from its "sensory organs"? That it cannot acquire such knowledge?
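(To make the toy model concrete, here's a minimal sketch - purely my own illustration, with made-up names like Machine and sense - of "knowledge" as a memory state updated through a mediating, digitizing sensory channel:)

    # A toy illustration (hypothetical, not anyone's real design): the
    # machine's "knowledge" is just its memory state, and its "sensory organ"
    # delivers a mediated, quantized approximation of a quantity in the world.

    class Machine:
        def __init__(self):
            self.memory = None  # the machine's current "knowledge"

        def sense(self, true_value, resolution=0.5):
            # The sensor digitizes reality: the reading is filtered and
            # approximate, yet still systematically caused by the world.
            reading = round(true_value / resolution) * resolution
            self.memory = reading  # this state change is the acquisition of "knowledge"
            return reading

    m = Machine()
    m.sense(3.14159)   # world -> sensor -> memory
    print(m.memory)    # prints 3.0: a coarse approximation, but still about the world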

Empirical (or a posteriori) knowledge is usually contrasted with a priori knowledge: stuff whose truth is independent of the outside world, usually a function of the meaning of words (such as "All bachelors are unmarried" - these are analytic truths). Things that are supposed to be true independently of the outside world, yet not embedded in the meaning of the words, are supposed to be "synthetic a priori" truths. But it seems to me that a priori truths come from the brain examining itself: the only way such "knowledge" can be obtained, i.e. the only way the state change can occur, is by examining the physical process of reasoning itself, whether directly, or indirectly as a result of the "programming", i.e. the construction of the machine's / brain's mechanism for reasoning.

These "a priori truths" are mediated by the "I/O of self-reflection", and are not actually a priori at all, in practice. The knowledge of the truths, i.e. the experiential sense of "dawning on oneself", i.e. what it feels like to experience a state change in one's knowledge representation, came about because of a physical process which may or may not have bugs; i.e. it is mediated.

So, a counterpart to "we have eyes, therefore we cannot see" - a lovely caricature - is "we have brains, therefore we cannot think". It seems to me no true Idealist can deny that he cannot have ideas.



Well, since it's late-night philosophy hour on HN...

"The whole denial of the possibility of empirical knowledge strand in modern philosophy has long rubbed me up the wrong way, but possibly only in a way that is clear to programmers."

I wouldn't personally go so far as to deny the possibility of empirical knowledge. But I would at least say that putting empiricism on a solid logical foundation is damned difficult, and maybe impossible.

"The denial usually derives from a distinction between the mind and the world outside it; things-in-themselves from the outside world can never be perceived by the mind, because all perceptions are mediated by sensory organs, they are all filtered one way or another."

Here you've hit a particular nail quite squarely on the head. In the English-speaking pragmatic tradition, there's been quite a lot of work devoted to getting away from that distinction and a few others. Mostly, these distinctions have been inherited all the way from Plato (with a few exceptions, like the analytic/synthetic divide that Quine so famously argued against), and the moment you accept them you also take on the nastiest briar patch in all of western philosophy.

So the modern (postmodern? Rorty had issues with that label) pragmatist simply says: you know what? If your theory of nuclear physics lets you build a working power plant, don't bother losing any sleep over whether it matches up with the way the world "really" is, because that's not a useful question to ask.

And there's quite a strong temptation to buy into that point of view. You don't have to muck around with the logical foundations of empiricism and all the clever traps Hume left behind him. You don't have to trudge through the metaphysics of propositions in the hopes of establishing truth as correspondence. And with far less work than you'd put in solving those sorts of tangles, you can even get a convenient system for judging competing theories and choosing between them. Curiously, it ends up looking a lot like Popper's attempt at a falsificationist basis for empiricism.

But it also comes under fire from practically all sides. The foundationalists don't like it, because it says everything they've been doing since Descartes was pointless. The relativists don't like it because it still perpetuates the notion that some theories are better than others. The metaphysicians don't like it because it (literally) throws them under the bus. And the average person probably doesn't care much for it because it doesn't match up with the "common-sense" view of the world most people adhere to in western societies, especially since most of what westerners consider "common sense" goes straight back into the tradition that starts with Plato.

Of course, there's nothing in this which says you can't still have a notion of "reality". It's just that asking whether something corresponds to "reality" doesn't seem so important anymore. Thinking of atoms as miniature solar systems, with the nucleus in the middle and the electrons grouped in orbits around it, almost certainly doesn't correspond to how they "really" work, for example. But thinking of atoms in that way does let you get a lot of useful chemistry done (it gets you the layout of the periodic table, and the reactive properties of the elements, and...). It won't help you build a nuclear reactor -- you need quantum mechanics for that -- but if you're not building a nuclear reactor, then why does it matter?


I actually think that you're confusing a little bit of modern philosophy with a lot of "postmodernism." You have to be very careful there. Stove, Russell, and Wittgenstein are "modern" philosophers, and they share a common thread: Russell and Wittgenstein were founders of the school of analytic philosophy - the school most philosophy departments in the English-speaking world are now centered on - and Stove worked squarely within it.

Postmodernism, on the other hand, has a comparatively small following. I'd make an exception for some of the figures Stove criticized most sharply, including Feyerabend. There's no denying that he and his followers were influential, but I would not argue that they hold the "dominant" position in modern philosophy. Their view is complex; they certainly don't deny empirical knowledge, but instead aim to criticize certain points of empiricism which scientists generally regard as "solid." However, as you just read, Stove was one of many who sharply criticized Feyerabend for "abusing" logical expressions.

I don't want this to turn into a really long argument, but I think you did make an interesting point. Many who believe that a posteriori knowledge is impossible stand in opposition to the philosopher/mathematicians of the early-to-mid 20th century. Some of the most notable are Russell, Whitehead, and Wittgenstein (and I also want to briefly mention Gödel, who was not very active in philosophy, but whose mathematics helped the field immeasurably).

As opposed to Feyerabend and especially the skeptics, I would say that they have it all wrong. It is not a posteriori knowledge which is impossible; it is a priori knowledge which is impossible (or tautological, to be more accurate). Many of the early modern philosophers (especially the empiricists, like my heroes John Locke and David Hume) supposed that much of our knowledge comes from the outside - that is, from our experience. As we now know, they were largely correct. We gather a huge amount of our personality and worldview from our experiences, with very little "innate" knowledge.

One of the main hold-outs from that time was mathematics. Many believed that mathematics was something a priori solid. If you read Descartes's Meditations or Hume's Enquiry, you will see a lot of mention of Euclid, specifically his geometry. Time and time again the philosophers used the example of basic addition or the laws of geometry as arguments for a priori knowledge: that is, even if I don't know that that tree exists, I can at least know that 2+2=4. This would seem to be an argument for the skeptics and idealists (like Berkeley), but instead it is an indictment, for even mathematics is not safe from a posteriori reasoning.

Why does 2+2=4? Because we have defined it to be so. We have come up with an algebra, the counting numbers, and defined addition on them. Does this hold some innate truth, an a priori truth about the Universe? I would argue that no, it doesn't. This a priori knowledge is tautological -- math comes up with the "right" answer because we defined what the right answer in our consistent system is. Except we really didn't. Gödel has helped here by proving that no moderately complex system can be both complete and consistent. This is one nail in the coffin of a priori math, but it continues.

Eventually we reach some of our most basic axioms -- Peano arithmetic. It would seem that these are truly untouchable. a != !a. Who can disagree with this? Well, if you look closely, you'll see an assumption here -- or, more importantly, a definition. We define !a. We define these expressions. These are a priori, and many believe that you can build a priori systems out of them. The problem is -- you can't. Russell and Whitehead saw this soon after Gödel's insight, but it's still a contentious issue.
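(To make the "true because we defined it so" point concrete, here is a minimal sketch in Lean - my own illustration, nothing from the thread - in which 2 + 2 = 4 is proved literally by unfolding the definitions:)

    -- A minimal sketch (my own illustration): Peano-style naturals with
    -- addition defined by recursion. "2 + 2 = 4" then holds by definition;
    -- the proof is just `rfl`, i.e. unfold the definitions and compare.
    inductive PNat where
      | zero : PNat
      | succ : PNat → PNat

    open PNat

    def add : PNat → PNat → PNat
      | n, zero   => n
      | n, succ m => succ (add n m)

    def two  : PNat := succ (succ zero)
    def four : PNat := succ (succ (succ (succ zero)))

    -- No appeal to the outside world is needed: the "truth" falls out of how
    -- the symbols were defined.
    example : add two two = four := rfl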

Well, I tried to keep that as brief as possible, but as you can see, philosophy tends to drone on and on. This isn't really a detailed analysis, but think of it as a footnote of my views on the issue.


Gödel did not show that "no moderately complex system can be both complete and consistent". What Gödel showed was that a recursively enumerable set of axioms that is rich enough to describe the natural numbers cannot be both complete and consistent.
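(For reference, a careful statement of the theorem being invoked - the standard textbook formulation, not anything specific to this thread:)

    % Goedel-Rosser first incompleteness theorem (standard statement): if $T$
    % is a consistent, recursively enumerable theory that interprets enough
    % arithmetic (e.g. Robinson's $Q$), then there is a sentence $G_T$ with
    \[
      T \nvdash G_T
      \quad\text{and}\quad
      T \nvdash \neg G_T ,
    \]
    % i.e. $T$ is incomplete. The "recursively enumerable" hypothesis is doing
    % the real work, which is exactly the point made below about taking all
    % true statements as axioms.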

The second-order Peano axioms for the natural numbers have only one model up to isomorphism. The second-order axioms are not computable, that is, not recursively enumerable. The first-order Peano axioms are recursively enumerable but have infinitely many non-isomorphic models.
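(Written out - standard notation, nothing thread-specific - the difference is that second-order induction quantifies over all subsets of the naturals, while the first-order version is only a schema over formulas:)

    % Second-order induction: a single axiom quantifying over every subset P
    % of the natural numbers.
    \[
      \forall P \subseteq \mathbb{N}\;
        \bigl[\, P(0) \wedge \forall n\,(P(n) \rightarrow P(n+1))
          \;\rightarrow\; \forall n\, P(n) \,\bigr]
    \]
    % First-order induction: an axiom schema, one instance for each formula
    % $\varphi$ in the language of arithmetic.
    \[
      \bigl[\, \varphi(0) \wedge \forall n\,(\varphi(n) \rightarrow \varphi(n+1)) \,\bigr]
        \;\rightarrow\; \forall n\, \varphi(n)
    \]
    % Quantifying over all subsets pins the structure down up to isomorphism
    % (Dedekind's categoricity theorem); quantifying only over definable
    % properties leaves room for the non-standard models mentioned above.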

What Russell and Whitehead tried to do was to remove humans from mathematical knowledge by finding a computable system of axioms and rules - that is, a mechanical process by which every mathematical truth could, in principle, be derived and checked. Gödel showed that this is not possible. This is the reason why Penrose and some others think that AI will never reach the level of human intelligence.

Not sure if this impacts your points, but it is definitely not the case that no moderately complex system can be both complete and consistent. In fact, a simple construction demonstrates the incorrectness of your statement: just take as your axiomatic system the collection of all true statements in whatever structure you are working with. That's a complete and consistent axiomatic system. It's not helpful, because there is no easy-to-use (think: computable) criterion for determining which statements are axioms and which ones aren't.


I am appropriately corrected. This actually doesn't change my view at all, because when I said "moderately complex," I assumed that the natural numbers were included in that. If you go back and read Descartes and Kant you'll see much the same treatment - addition as defined on the natural numbers is used many times as an example of a priori knowledge.

You are completely correct though - it was very late when I first commented on this, and I was tired and simply wrong. I'll try to be much more specific in my treatment of mathematics in the future, although I am not a mathematician. I just want to note that I never claimed that Gödel proved that Peano arithmetic was incomplete or inconsistent (although, if I remember correctly, his second theorem shows that PA cannot prove its own consistency), but simply that the nature of Peano arithmetic does not imbue any a priori knowledge of the universe or our existence. This is supported by Gödel in the broad sense that we cannot generate a "universal theory" of mathematics. However, my main point is that mathematics is not truth; it is only a model of our definitions and observations -- a tool, if you will -- and an incomplete model at that.


I didn't think that your view would be changed. Quite honestly, I'm not sure I understand the philosophy behind this. But the fact that there is only one model of the second-order axioms of arithmetic (Peano's axioms with full second-order induction) is a bit surprising to me.

We can't come up with a computable system for finding all mathematical truth, but there appears to be a hard-wired number system in the universe. It's not computable, but it is unique. The natural numbers lead naturally (no pun intended) to the integers in a unique way. The integers uniquely lead to the rationals, and the completion of the rationals is a unique object called the real numbers. The unique algebraic closure of the reals is the complex numbers. There is uniqueness at each step. This, coupled with the utility of mathematics in describing natural processes, is... strange to me and to some others.
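(The chain of constructions, written compactly - these are the standard ones, each unique up to the appropriate notion of isomorphism:)

    % The standard tower of number systems; each arrow is a canonical
    % construction.
    \[
      \mathbb{N}
        \;\xrightarrow{\ \text{Grothendieck group of } (\mathbb{N},\,+)\ }\;
      \mathbb{Z}
        \;\xrightarrow{\ \text{field of fractions}\ }\;
      \mathbb{Q}
        \;\xrightarrow{\ \text{metric completion}\ }\;
      \mathbb{R}
        \;\xrightarrow{\ \text{algebraic closure}\ }\;
      \mathbb{C}
    \]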

I don't know what this has to do with your points because I didn't understand them. Not because you didn't write clearly but because I don't know enough philosophy. I'm a mathematician and know very little about philosophy.

Thanks for your input.


Perhaps I am too influenced in my opinions of philosophy by my girlfriend, an ardent adherent of Kantian (transcendental) idealism - and she's German, to boot. We've gotten into heated standoffs over it, so we've mutually agreed not to bring it up.

I agree that a priori knowledge is tautological and subject to refutation via Gödel; and as a self-described software engineer (as opposed to a computer scientist) I suspect I have an inherent bias towards valuing empiricism over idealism.

I don't subscribe too strongly to the "blank slate" idea, at least not insofar as it relates to the "nature vs. nurture" debate. I believe a lot more is embedded in our nature - in our inherited evolutionary makeup - than most people would like to admit. I think much "knowledge" (such as how to acquire language) is "baked in" at the physical level, in the genome and proteome; but of course, this knowledge is not a priori either - it comes from the "experience" that the evolutionary line of mechanical interactions has accumulated.


Yes, I was unclear. When I said "knowledge," I was using Hume's treatment of knowledge, which is predominantly intellect-based. That is, instinct, innate brain capacities, etc., are not part of "knowledge." The current nature vs. nurture debate is important, but it's not what I was talking about.



