The Dialogical Roots of Deduction[0] is an excellent book that my math professor recommended to me after I commented that mathematics is a form of persuasion, not a religious truth. It was refreshing to read something I could agree with so easily. That said, I think a lot of philosophy of mathematics is not as insightful unless you also study mathematics. It is easy to misunderstand theorems or try to apply them incorrectly.
Max Tegmark, Karl Popper, and Roger Penrose are the three best known for promoting the Pythagorean-Platonic idea that mathematics precedes matter. That seriously freaks some people out; they can't even deal with the idea. But isn't it fairly basic that triangles are transcendent and would exist in any civilization, in any galaxy? Matter has never produced a perfect sphere, but spheres are nevertheless truly real, right?
There’s an entire chapter devoted to Plato in Stewart Shapiro’s Thinking About Mathematics, the first book the author recommends. I think it’s pretty reasonable to recommend people start with an accessible contemporary survey rather than diving directly into Plato’s dialogues or any other particular primary source. As far as I’m aware, Pythagoras left no written works and would thus be unlikely to appear on a reading list.
“…would thus be unlikely to appear on a reading list.”
Yes, sorry to be facetious about Pythagoras, but those early ideas are still rather vibrant and they do get under people's skin. I didn't realize until recently what a huge effect Pythagoras (or his transmitted ideas) had on the scientific revolution, vis-à-vis Kepler, Galileo, Newton, Hooke, etc.
> I think it’s pretty reasonable to recommend people start with an accessible contemporary survey rather than diving directly into Plato’s dialogues or any other particular primary source.
That is basically what you should never do, in any topic, unless the topic is so esoteric that a prior top-level explanation would help.
I don't really understand this. It's very common when starting a new topic to begin with a secondary survey and then dive into primary sources - often ones recommended by the survey itself.
You are also missing a distinction between doing philosophy (what the post is advocating) and doing history of philosophy, for which secondary literature plays different roles:
Doing philosophy:
Platonism about mathematics has had proponents and developments since Plato; it's quite likely that a secondary survey provides links to more recent primary sources than Plato himself, which you would otherwise be ignorant of.
Doing history of philosophy:
Diving straight into ancient primary sources without some supporting secondary literature could leave you confused and without some kind of historical context.
> it's quite likely that a secondary survey provides links to more recent primary sources than Plato himself, which you would otherwise be ignorant of.
That is the only advantage of secondary sources: telling you that guys X and Y wrote about Z. But if you want to learn about Z, go read what those guys actually wrote.
If you don't understand a particular concept, word, or whatever, then do some research in the secondary material available.
>literature could leave you confused and without some kind of historical context.
Especially in recent secondary sources (basically anything written in the last 50 years), I largely skip the comments on "historical context", as they are full of bs and heavily politically biased, one way or another - not necessarily in fields like mathematics, but in history or philosophy, etc.
Reading a recent survey is the perfect thing to do if you want to get an overview of the field. There is far more primary material than one can consume in reasonable time.
I can only second this. Especially regarding the 1st Shapiro book: We had an advanced seminar that was essentially "read and discuss the essays in the book". We were about 6 people, including the (excellent) professor. It was very productive and we got a very good overview of the field back then, and I think everyone enjoyed that seminar very much.
To relax from the reading, and since most (all?) of us already had some advanced math education beyond the average philosophy student (e.g. I majored in CS, others had a maths major or minor), we did a professor-guided proof of the incompleteness theorem - he had always wanted to do that but couldn't in his normal lectures. That was fun.
I suspect I wouldn't have such fond memories if everyone had had to read the original sources, especially since that would have been nearly impossible in the given time frame. In most seminars that's usually accelerated by having each person read one source and give a presentation and/or write an essay. I think reading a well-written essay and discussing it in a small group is the superior approach.
Of course, if you want to become a professional in the field this will not replace reading some/most/all of the original sources in the long term. It is then a matter of opinion whether it's better to start reading the sources with or without some broad-but-shallow knowledge of the field. I prefer with, since I never read such material to "just learn what X said on Y so I pass some exam", but because I like thinking about it and putting it into perspective [e.g. temporal, but also alternate ways of looking at things]; but then I'm not a professional in that field, so insert shrug-pony here.
It’s also a consequence of Christian theology, namely Christ’s role as the eternal Logos. And yes of course Christian philosophy owes a huge debt to pre-Christian philosophers, particularly Aristotle. Dualism vs monism is probably the fundamental philosophical disagreement.
Really though, the continuing lack of respect for C.S. Peirce's contributions to modern logic is the most galling to me. His semiotic is so full of promise.
Indeed! Abstractions deal with commonalities, and those do exist. ("There exists something in common between the three horses and the three apples you want to treat the horses with.")
The notion of commonality exists in the head of the observer because it's a useful fiction (even 'the horse' itself is). But that is a subjective form of existence, part of what Jakob von Uexküll called one's "Umwelt" (the world as it presents itself to you). There's no reason to believe it precedes matter, or that triangles exist in a world without anyone to conceive of them.
edit: there's a great talk by Daniel Dennett on the topic that luckily still is up on archive.org
As another comment said: what do you mean by "exist"? A little more rigour is required here. Using your example: what does it mean for three horses and three apples to exist?
The better question: What does it mean for three (of anything) to exist? Why not one thing, one thing, one thing?
The definition of existence you appear to be using is the physical proximity of those objects. But even that can get quite hairy. If there are two apples within inches of each other and another apple 100 feet away, are there three apples or two? The answer to this question depends on what you mean by the existence of three apples.
If an apple is cut in half, does the apple still exist? When the halves are brought together, at what distance between the two does the apple start existing?
> What does it mean for three horses and three apples to exist?
You tell us. You seem perfectly comfortable speaking of a multiplicity of instances.
(Personally, I would begin the discussion with substantial form. I can speak of many triangles that instantiate the same triangularity, which allows me to assert the same properties of triangularity of all of them. And yet triangularity is not to be identified with any particular triangle.)
This is a valuable point. Our thinking is too tightly tied to language. Of course, when we say that the abstract triangle exists we mean that what exists is the property of triangularity. Properties (of things) do exist, whether we like it or not. Just as the number 3, say, also exists - in the form of the "threeness" property (of a collection of objects). There is not much more to be said here, except maybe that most, if not all, properties manifest themselves in, or indeed are, relations between things, and also that they are not arbitrary products of the mind, being impressed into us by the objective reality - and how can something be impressed into us that is not itself part of reality, i.e. does not exist in the most obvious sense of the word.
> they are not arbitrary products of the mind, being impressed into us by the objective reality - and how can something be impressed into us that is not itself part of reality, i.e. does not exist in the most obvious sense of the word.
They are not arbitrary products of the mind, for the reasons you explain; but they are products of the mind.
The property of NP-completeness didn't exist until we invented algorithms and analysed their time complexity, though many problems do possess that property; and in the same way the property of triangularity didn't exist until the Greeks started imagining geometry in terms of idealized regions of space, defined in terms of simple relations between points without size. (And then the triangularity property changed a lot when people started questioning the parallel postulate and discovered non-Euclidean geometry.)
It is not necessary for an entity to exist in order to form impressions in the mind; it is enough that the mind can imagine it from actually existing perceptual elements, with our imagination filling in the details. Does a monster truly need to exist under the bed to impress our juvenile minds?
> The property of NP-completeness didn't exist until we invented algorithms and analysed their time complexity [...]
Now that is a bold statement to make with such certainty :)
Imagine it's 1822 and you're a salesman, travelling from town to town to sell your wares. Of course you want to save on time and distance travelled, so you'd like to pick the shortest route that covers all towns on your list. Now how complicated can that be?
I'd dare to say the problem you were facing was already NP-complete back then; you just didn't know whether you were too stupid (no, you're not!) or whether it was in fact impossible without trying every possible route (yes).
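For concreteness, here is a minimal Python sketch (the town names and distances are invented) of what "trying every possible route" amounts to; with n towns there are n! routes to check, which is exactly what makes this blow up:

    import itertools

    # Made-up towns and symmetric pairwise distances, purely for illustration.
    towns = ["A", "B", "C", "D"]
    dist = {("A", "B"): 4, ("A", "C"): 7, ("A", "D"): 3,
            ("B", "C"): 2, ("B", "D"): 5, ("C", "D"): 6}

    def d(x, y):
        # Distances are symmetric, so look the pair up in either order.
        return dist[(x, y)] if (x, y) in dist else dist[(y, x)]

    def tour_length(route):
        # Visit the towns in the given order, then return to the starting town.
        return sum(d(a, b) for a, b in zip(route, route[1:] + route[:1]))

    # Brute force: enumerate every ordering of the towns and keep the shortest.
    best = min(itertools.permutations(towns), key=tour_length)
    print(best, tour_length(best))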
In other words: when inventing a new predicate (e.g. is_np_complete) in non-temporal, binary logic, that predicate is always true or false (or undecided :)) for a given input. You essentially say that the predicate only exists once it has been created, but I say it has merely been given a name to reference it. I'd like to present a logical argument for why I am right beyond a doubt, but this might be undecidable; at least I'm stuck thinking about it, much like the imaginary you from 1822 (my problem is that I don't know if it's possible to enumerate all possible predicates [that map each possible input to every possible output {true, false, undecidable}] - if it is, I think I can make a sound argument). And very much as that fictional person should probably just start travelling on a good-enough route, I should also get back to work ;)
Maybe you can come up with a formal proof why a predicate can only exist once it has been formalized for the first time?
//edit: some clarification to show that this is a friendly discussion :)
> some clarification to show that this is a friendly discussion :)
Thanks; people assume the worst intentions because of the lack of tone in written messages. :-)
> You essentially say that the predicate only exists once it has been created, but I say it has merely been given a name to reference it.
I don't see how this distinction makes much of a difference. Does a predicate exist if no human is thinking about it?
Before anyone defined the problem for the first time, the problem didn't exist and therefore it couldn't have properties. Unless I were a salesperson with a singularly mathematical mind, I would not try to solve the problem for every possible case but just for the particular set of cities that I traveled through. And if I happened to be a rural Chinese postman, [1] ;-) I might well be able to solve my problem in polynomial time. NP-completeness is a property of a family of problem instances, so it matters only when you are studying the whole family, even though it may affect any particular instance (or not).
[1] i.e., there may be subsets of the general problem that can be solved in polynomial time, and my particular instance may belong to such a subset. See the Rural postman problem and the Chinese postman problem.
However, let's assume that we have defined the problem in its general terms. So, does the property of being a hard problem exist as soon as you state it, as an inescapable consequence of its formal definition, even before someone starts to study its complexity? I'd say absolutely yes, and if that's what you mean by saying that the property 'exists', then we are on the same page.
> Maybe you can come up with a formal proof why a predicate can only exist once it has been formalized for the first time?
Yes, I think I could do that if I tried. I also think that I could do the opposite if I tried, showing that any predicate exists from the start of time, just waiting to be discovered.
You see, the problem with formalism is that the theorems that can be proven depend completely on the assumptions you incorporate when defining a specific formal system; therefore, I can orient the reasoning towards whichever conclusion interests me, as long as the system does not incorporate a set of axioms and rules of inference that produce an internal contradiction.
Formal systems are most valuable because they allow us to get rid of inconsistent assumptions in our reasoning, not necessarily because those statements correspond exactly one-to-one to entities in the real world.
> Well, a fighter jet, too, is a product of the mind.
I'd say there's a difference between having a physical, tangible fighter jet in front of you, with all the connections between its molecules in a configuration that allows it to fly, and having "the property of being a fighter jet" in front of you.
So much of it comes down to what exactly you mean by "exist", which ultimately ends up being a boring disagreement. People can have bigger or smaller definitions of what it means and then have passionate disagreements with each other about what fits inside, which are ultimately about nothing but how big a person prefers their definition to be.
Sure, there are many ways in which something may not exist; but existence usually demonstrates itself in a pretty straightforward way, like with the horse who bites you if you show her that you have no more apples left.
Look up intuitionistic logic and constructivism to see why existence can have a weaker or stronger meaning depending on the foundations of mathematics.
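A standard worked example of the difference (my addition, not from the parent comment): classically one can prove that there exist irrational numbers a and b with a^b rational.

    \text{Either } \sqrt{2}^{\sqrt{2}} \in \mathbb{Q}, \text{ and we take } a = b = \sqrt{2};
    \text{or } \sqrt{2}^{\sqrt{2}} \notin \mathbb{Q}, \text{ and we take } a = \sqrt{2}^{\sqrt{2}},\ b = \sqrt{2}, \text{ since }
    \left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}} = \sqrt{2}^{\,2} = 2 \in \mathbb{Q}.

The proof never tells you which case actually holds, so an intuitionist does not accept it as establishing existence; that is the weaker vs. stronger meaning of "exists" at stake.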
Then you're painting yourself as someone who has a certain definition of "exist", can't imagine other definitions, and would unwittingly participate in such discussions about differences in definition rather than the subject matter.
> but existence usually demonstrates itself in a pretty straightforward way, like with the horse who bites you if you show her that you have no more apples left.
I'm not sure what you are trying to say, but existence has many liminal cases. There's a rich tradition of ontologists trying to get at a good definition of existence.
Good comment. This seems to come up more in number theory than in foundations/philosophy of mathematics, but I agree it has an important place. Not just triangles, but natural numbers having a fundamental place in reality (e.g. integral numbers of dimensions, degrees of equations at the foundations of physics, etc. Daniel Shanks has a list of about 60 of these "arguments" for a Pythagorean interpretation of numbers).
What does being a Platonist mean in this case, I wonder. The definition I was once given is just that mathematical objects exist, but philosophers take it to mean that one believes in Plato's Theory of Ideas. The Theory of Ideas is a major development by Plato, but it has fatal flaws, as set out in Plato's Parmenides. If so, I wonder how these Platonists have chosen to resolve them.
> philosophers take it to mean that one believes in Plato's Theory of Ideas.
This is not accurate. Platonism, in a contemporary context, is the view that abstract objects exist. And so in particular, to be a platonist about mathematics is to believe in the existence of mathematical entities.
The Theory of Ideas (aka Theory of Forms) is a specific conception of the true nature of the universe. Wikipedia seems to have a quite adequate introduction to it: https://en.wikipedia.org/wiki/Theory_of_forms
* Where Mathematics Comes From: How the Embodied Mind Brings Mathematics Into Being by George Lakoff and Rafael Nunez
* The Mathematician's Mind by Jacques Hadamard
* Mathematics Form and Function by Saunders MacLane
* On the Brink of Paradox: Highlights from the Intersection of Philosophy and Mathematics by Agustin Rayo
The preface and chapter 1 of A Book of Abstract Algebra by Charles C. Pinter has an excellent discussion of a certain perspective of mathematics. Hermann Weyl wrote a lot about this stuff too.
Neophyte speaking - I love it, but so much in this area feels deeply established. Who are the rebels, the provocateurs, and what are their arguments? If I wish to 'know thine enemy', who would I read?
For example:
- who questions the foundational axioms of mathematical structuralism these days?
- who argues from a comparative analysis of logic and field theory?
> A funny story about someone who was an ultrafinitist:
> -Do you believe in 1?
> -Yes, he responded immediately
> -Do you believe in 2?
> -Yes, he responded after a brief pause
> -Do you believe in 3?
> -Yes, he responded after a slightly longer pause
> -Do you believe in 4?
> -Yes, after several seconds
> It soon became clear that he would take twice as long to answer the next question as the previous one. (I believe Alexander Esenin-Volpin was the person.)
It's not an answer to your question, but you may be interested in Alfred North Whitehead. He was initially trained as a mathematician and co-wrote some important works, but eventually shifted to religion and developed what is called "process philosophy."
> Beginning in the late 1910s and early 1920s, Whitehead gradually turned his attention from mathematics to philosophy of science, and finally to metaphysics. He developed a comprehensive metaphysical system which radically departed from most of Western philosophy. Whitehead argued that reality consists of processes rather than material objects, and that processes are best defined by their relations with other processes, thus rejecting the theory that reality is fundamentally constructed by bits of matter that exist independently of one another. Today Whitehead's philosophical works – particularly Process and Reality – are regarded as the foundational texts of process philosophy.
Alain Badiou also wrote some books on mathematics. He is definitely outside of the "official canon" in analytic philosophy of mathematics. His math-related work seems to have been controversial among more traditional mathematicians and philosophers:
Badiou is a user of mathematics, not a philosopher of mathematics (he himself would agree). Being and Event (no volume of what is now a trilogy) is certainly not about mathematics, nor about the philosophy of mathematics. Rather, self-avowedly, it is a work in the service of radical politics.
It makes me curious to see a similar reading list put together for computer science - the history and theory of computing, or the kinds of things you might generally study in a Comp Sci program (as opposed to practical skills / how-to types of reading).
The stuff on that list all seems rather historical. I have no clue as to what is going on in the field in the current century, and the list doesn't seem to help much. This was better: https://www.andrew.cmu.edu/user/avigad/Papers/PhilMath.pdf
Working through all of Peter Smith's suggested books from the original post should give one a solid understanding of much of what's been going on in the field up until fairly recently. Every recommended book in Smith's list from the original post, up to and including Shapiro's The Oxford Handbook of Philosophy of Mathematics and Logic, also appears in Avigad's suggested readings from your link, several under the section on contemporary developments. I think it's a plus that Smith's list is more focused. Smith's list also contains more recent works that do not appear in Avigad's list.
Well, let's say I purely want to know what has been happening this century, i.e. from 2001 until today, treating anything earlier as general background knowledge (formalism vs platonism, etc.). I saw a few works in Smith's list that were published later than 2000, but there was no indication that they weren't primarily surveys of older work. The Oxford Handbook is from 2005 and it sounds like a survey. I don't see a date on Avigad's article, but it does discuss some stuff that was contemporary at the time it was written, and some of the MO posts are quite recent and topical.
I think everything starting with the below paragraph of Smith’s article tries to give a decent answer to your question.
As I said, I have surely provided more than enough introductory reading! Still, let’s ask: what has been published since around the time of the Handbook which is both of note and is also reasonably accessible? There was a short collection edited by Otávio Bueno and Øystein Linnebo called New Waves in the Philosophy of Mathematics (Palgrave, 2009), which has moderate interest. Some of the papers collected in Paolo Mancosu (ed.) The Philosophy of Mathematical Practice (OUP, 2008) are worth reading. And of course, the journal Philosophia Mathematica continues to publish many good articles. But what of books?
I’m not really up on the latest stuff myself, but like other fields, typically current research is published in journal articles and it takes time for developments to be synthesized into survey books. Then again, Smith seems to believe there’s been a lull in activity lately.
My sense, though, is that after a period in which the philosophy of mathematics really flourished, there has perhaps been something of a lull more recently. However let me finish by mentioning a stand-out recent achievement, a little more wide-ranging than just philosophy of mathematics (though a considerably bumpier ride than perhaps the authors intended, so only just squeezing into what started out as an introductory list!) — namely Tim Button & Sean Walsh, Philosophy and Model Theory (OUP, 2018).
The final version of Avigad’s article was published in The Edinburgh Companion to Twentieth-Century Philosophies in 2007. Based on the content I would place the version in your link to sometime in 2006.
Thanks. I'm pretty ignorant of the whole field beyond some old basic stuff. But there are contemporary issues that certainly must have drawn attention. E.g.:
* natural language, cognition, and math. There is a book by Ganesalingam about this based on his PhD thesis, that was online a while back but was later taken down.
* Philosophical aspects of complexity theory. Scott Aaronson and Avi Wigderson have both written about this.
* Nonhuman cognition, AI being the closest to available. But would sufficiently smart fish or birds develop mathematics different from human math, given that their sensory systems must be a lot different from ours?
* There is an article by Mirco Manucci that I thought was cool, but might be considered bogus by professionals, about ultrafinitistic model theory. I'd like to know if there is any other work in this area.
* I've looked at some stuff by J-Y Girard and can't tell what the heck it is about, but some of it has turned out to be important. Some more comprehensible treatments or criticism of his more recent works would help.
Eg, you see that algebra and geometry “reappear” in things like computing — and the different perspectives of difference equations (geometry) versus typed statements (algebra).
Or, in things like category theory giving a framework where we can “abstract” arguments from different (type) theories that are the same “shape” — which has connected a number of fields.
This is wonderful! Just what I need. I keep returning to this one YT series by Frederic Schuller, "Lectures on Geometrical Anatomy of Theoretical Physics"[1], where he goes through the mathematical foundations of theoretical physics from the very bottom. He starts with propositional and predicate logic, presents the ZF axioms of set theory, and then boom, you're off to the races!
Recently, I posted a comment in a questions thread on /r/math asking where I could get a more thorough treatment of the philosophy of mathematics, as these lectures can get a bit arm-wavey, but the responses I got were disappointingly dismissive. One guy said all this philosophy was "not interesting" and considered the foundational stuff to be "just a construction". I get that it may not be directly useful for current research to a mathematician trying to get published, but come on!
It’s kind of disappointing that history, philosophy, and mathematics are not taught together. When you try to piece these things together yourself you start to realize how much context you miss out on when studying them in isolation.
It's not unusual for math books to include historical notes. For example, Éléments de mathématique by N. Bourbaki includes an entire companion volume, Éléments d'histoire des mathématiques.
When I was a math major in college, and for some years afterward, I went through such questions and responses. If you like digging into the issues and work of G. Cantor, B. Russell, J. von Neumann, K. Gödel, P. Cohen, etc., fine.
But here is a shorter response:
Q. Is the square root of 2 rational?
A. Rational means the ratio of two whole numbers.
So, suppose p and q are whole numbers and we have
(p/q)^2 = 2
then we have
p^2 = 2(q^2)
So the number of factors of 2 on the left side is even, while on the right side it is odd. But 2 is a prime number, and we have a theorem, the fundamental theorem of arithmetic, saying that each positive whole number can be factored as a product of primes in only one way. So the square root of 2 is not rational. Done.
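To spell out the factor-counting step (my notation, not part of the original argument), write p and q in terms of their powers of 2:

    p = 2^{a} m, \qquad q = 2^{b} n, \qquad m, n \text{ odd},

so that p^2 = 2(q^2) becomes

    2^{2a} m^{2} = 2^{2b+1} n^{2},

where the exponent of 2 is even on the left and odd on the right, which contradicts the uniqueness of prime factorization.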
This argument illustrates the philosophy of mathematics. Here we never mentioned an "ism".
Opinions can vary, but my opinion from my background in pure/applied math is that this little argument about the square root of 2 is about all there is that is solid and needed about the philosophy of math. Opinions can vary!
The original proof I heard in high school just assumes that p and q have no common factor, i.e. if x is rational we can write it as m/n and cancel common factors to get p/q in lowest terms. I don't recall any argument involving the fundamental theorem. The point is that you reach a contradiction: it becomes obvious that p and q both have 2 as a common factor. This is a proof by contradiction, and if the logic is right, the assumption that the number is rational must be wrong.
The historical original, by the Pythagorean school (as they were Greek), involved geometry, not algebra.
Somewhat hoist by his own petard there - it’s not at all obvious that ornithology has no use to birds… Not sure if underneath what he said he meant “philosophy of science is not at all useful to science” (which is doubtful) or the more restricted “pick a working scientist, they probably have little to gain by devoting time to philosophy of science” (maybe reasonable, if a little dogmatic to insist that is true in all cases).
If we took the comparison seriously though, it might actually turn out to be a reasonably insightful metaphor - eg. ornithologists don’t have the same primary concerns as the individual birds they study, a bird is not going to understand or get any profit from studying ornithology (and yet that it can still be a good bird) etc.
Read in a straightforward manner, Feynman simply didn’t understand the value of philosophy, though he did inevitably and unwittingly tread across philosophical territory.
In some sense, we all philosophize to one degree or another. We can either acknowledge that and learn to do it well, or we can do it poorly.
Or we can acknowledge that and aim at restraint, stopping ourselves whenever we catch ourselves doing it, recognising that it's a bug in our software. I've spent most of my life reading philosophy. I've been clean for several years and I'm glad (it's no small feat for me). I recognise now that it was nothing but a bug in my mind. I'd advise younger people to focus on the actual mathematics and science, because that's where true knowledge lies.
The solution to talking nonsense is not talking better nonsense, but recognising that one is speaking nonsense and had better be silent.
This. I was like you, 100%. Most of the liberal arts are bullshit. It's the facts that matter, not the opinions. Liberal arts guys love to quote people's opinions as if they were absolute truth.
I suspect by "liberal arts" you meant humanities and social sciences. Curiously, the term "liberal arts" encompasses natural sciences and mathematics as well. I wouldn't go quite as far as to say they're bullshit. For one, I do appreciate some poetry and other fine arts. Where my sympathies end, is appropriating the hard-earned credibility of science by so called social sciences and humanities, which often, as you wrote, use quotes of prior authors as some proof of truth. Rule of thumb: if papers in a field use "[famous author] wrote X, therefore Y" as the standard of proof, instead of "we conducted those experiments with this methodology and here is what we found", then it's closer to scholasticism[1] than to science and should have no business using the word "science" anywhere. To be fair, in each field of such a social science there are usually people trying to move it away from the authority-driven philosophy model and nearer the science model of experiments, so I wouldn't necessarily say it's all bullshit. But each time I hear someone "studying psychology" say "Freud wrote this", "Lacan said that" and expand that as if psychology was some Freud&Lacan fanfic, then proceeding to make conclusions from that about the real world with the asserted credibility of someone talking about discovery of the electron, I roll my eyes.
[1]: Reading works of scholasticism was dreadful. "Aristotle this, Aristotle that, I'm so insecure I'm afraid to utter a sentence without supporting it by something Aristotle said" and not a single experiment in sight.
But it takes philosophy to make the sorts of claims you make, making them self-refuting and incoherent. Perhaps the problem was not philosophy, but how you practiced it or how those you engaged with practiced it.
(Reply to second response)
W.r.t. the liberal arts, we must distinguish between genuine liberal arts and some decadent ersatz or blatantly ideological counterfeit. I don't wish to spend time making distinctions here and now, but it is worth keeping that in mind.
However, I should also note that what "the sciences"[0] give us is not a straightforward matter, either, and what they give us, the actual character of scientific aims, methods, research and results is precisely the domain of the philosophy of science. (My view is that a strong impetus in empirical science is mastery of nature and technological production, not necessarily the truth. The story is a bit more complicated than that, but I'll leave it here.) Empirical science is very much informed by background assumptions and can make metaphysical insinuations, assumptions and insinuations that are not the proper domain of the empirical sciences, but philosophy. The notion that science is magically free from the usual human foibles like unwarranted appeals to authority or bullying or ideological insinuation or whatever is, of course, false. Add to that the confusion that some scientists show when they fail to distinguish between science and scientism, for instance. Holding philosophical views or presuppositions, at least tacitly, can hardly be avoided.
W.r.t. scholasticism, I'm sorry if that was your experience, or your interpretation of your experience, but I would call that a thoroughly unjust characterization of scholasticism. Citations and giving credit is a normal part of any published work, but scholasticism is certainly not characterized by obsequious deference to prior authors (say, Aristotle; indeed, within the broad scope of what is traditionally called "scholasticism", you have varying views of, e.g., Aristotle). Scholasticism is famous for its rigorous disputations and appropriately draws from developments elsewhere.
[0] Even the broad view of what "science" is will vary by language. The English/Anglophone meaning of "science" is generally the more restricted "empirical science", while, say, the German "Wissenschaft" is more inclusive and broad, and would seem closer to the classical or traditional view of a science as a body of systematized knowledge.
"I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today - and even professional scientists - seem to me like somebody who has seen thousands of trees but has never seen a forest. A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is - in my opinion - the mark of distinction between a mere artisan or specialist and a real seeker after truth."
― Albert Einstein, correspondence to Robert Thornton (1944)
Modern philosophy of mathematics is still in part shaped by debates starting well over a century ago, springing from the work of Frege and Russell, from Hilbert's alternative response to the "crisis in foundations", and from the impact of Gödel's work on the logicist and Hilbertian programmes.
I wonder what the author thinks of von Plato's The Great Formal Machinery Works and other works on the history of the foundations of mathematics.
One of the things that stands out in the book is that when notions of mathematical logic and the foundations of arithmetic were being formulated by Frege and Grassmann in the 19th century, neither the notation nor the concept of proof as a mechanical process existed, and the process of creating theories about proof also involved laying down the concept of proof itself and creating tractable notations for it (Frege's original notation quickly becomes incomprehensible as expressions grow, for example). Principia Mathematica is notable for creating modern notation despite its failure to be a complete foundation of mathematics.
This is great, this shows that philosophy of mathematics has a long history and continues to be an active area of analytical thought. (Judging by recent discussions, I believe that much of the HN community desperately needs some education in this area.)
That's pretty much all their comments. Direct quotes from something like Wikipedia, the article, or a paraphrased version of a statement from the article.
The most important idea in math (and philosophy of math) is the idea of fixed points (aka spectra, diagonalizations, embeddings, invariants, braids). By fixed point I mean something like Lawvere's fixed point theorem. Fixed points give rise to "meaning" (it's really that general).
I vouched for this. I have no idea whether it's true, but I think it's worth discussing and relevant to the topic. My first real introduction to fixed points was in Dijkstra's weakest-precondition semantics of imperative programs, where a fixed point supplies the semantics of a while loop (via the Knaster-Tarski theorem). I wouldn't be surprised if there were indeed something deeper and more general there.
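As a toy illustration (my example, not Dijkstra's): on a finite lattice you can compute a least fixed point simply by iterating a monotone function from the bottom element, which is the same machinery that underlies loop semantics:

    def least_fixed_point(f, bottom):
        # Iterate f from the bottom element until the value stops changing.
        # For a monotone f on a finite lattice this reaches the least fixed
        # point (Kleene iteration, justified by the Knaster-Tarski theorem).
        x = bottom
        while True:
            nxt = f(x)
            if nxt == x:
                return x
            x = nxt

    # Toy example: the set of states reachable from state 0 in a small
    # transition system, i.e. the least fixed point of S -> {0} | succ(S).
    edges = {0: {1}, 1: {2}, 2: {2}, 3: {4}, 4: {3}}

    def step(states):
        return frozenset({0}) | frozenset(s for x in states for s in edges[x])

    print(least_fixed_point(step, frozenset()))  # frozenset({0, 1, 2})

The weakest precondition of a while loop is characterized as a fixed point of a predicate transformer in much the same spirit.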
I would say the most important idea in math is logic. Of course, fixed points do play an important role in logic as well. For example, Cantor's theorem means that it really makes no sense, when studying a mathematical universe, to try to include all operators on that universe in the universe as well. Which is what all type theories are trying to do. Of course, it is impossible, so all that type theories end up doing is cutting the mathematical universe into disjoint pieces.
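For readers who haven't seen it, the diagonal argument behind that claim (standard Cantor, my notation): given any map f from a universe X to its power set, the set

    D = \{\, x \in X : x \notin f(x) \,\}

cannot equal f(x) for any x, since that would give x \in D iff x \notin D. So there is no surjection from X onto its power set, and essentially the same diagonalization rules out fitting all operators X -> X inside X once X has at least two elements.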
0: https://www.cambridge.org/core/books/dialogical-roots-of-ded...