Hacker News | jimwhite42's comments

Here's an attempt to explain the irrational side of how programmers decide which programming language to use. Part of it goes into more detail on ideas like C programmers insisting 'on the danger because they cannot stand the possibility of the danger being taken away from them'.

"The Pervert's Guide to Computer Programming Languages" https://www.youtube.com/watch?v=mZyvIHYn2zk

It's pretty out there, but I also thought it was interesting.


> IMO actual leaders are just people who think ahead so when other people don't know what to do they have some sort of answer. They may not be in positions of authority at all.

That's a really great pithy summary. It gels with one popular definition of software architecture: good architecture is better than bad architecture at anticipating which bits of a codebase/solution are likely to change.


I think the Sorites paradox may point to something specific about these kinds of ideas, which is not shared with the meaning of things like evil or knowledge. There's a page on it on SEP: https://plato.stanford.edu/entries/sorites-paradox/


> Microsoft just announced 10 million dollar effort to make Rust their default systems language for their infrastructure

This seems like an incredibly low amount of money to achieve something like that.


Might be, yet it sends a strong signal into the industry, especially coming from one of the companies behind C++.

Similar to how clang slowed down as Apple and Google decided to focus on their languages, and key contributors abandoned clang.


And you can apply it; when you do, it really works.


> It seems like it should be possible, in theory, but it takes too much work.

The definition of a valid mathematical proof that I've heard mathematicians use is that it convinces other mathematicians. I think there's integrity in depth in mathematical proofs for lots of reasons; tying everything back to an axiomatic basis is a lot of extra work for likely no benefit.

On formal mathematics, almost no mathematicians pay attention to this afaik. It's something that lives in philosophy departments, not mathematics.


> On formal mathematics, almost no mathematicians pay attention to this afaik. It's something that lives in philosophy departments, not mathematics.

In the old days there were philosophers. They sought truth through thought and wisdom. Some started to measure reality and compare thought against that measurement, but real philosophers paid little heed for it didn't matter.

Testing their theory became known as physics. It turned out that testing against reality can be used without the theory, which we call engineering. We still have philosophers, and they still pay little heed to measurement.

There is a similar disaster inbound for mathematicians. Formal mathematics is tedious and mostly only shows things that you were pretty sure were true anyway, so why bother.

It turns out the physicists and engineers have unsportingly built machines which are really, really good at the tedious bookwork required by formal mathematics. That is going to go exactly the same way that introducing measurement to philosophy went for the philosophers.


> There is a similar disaster inbound for mathematicians. Formal mathematics is tedious and mostly only shows things that you were pretty sure were true anyway, so why bother.

> It turns out the physicists and engineers have unsportingly built machines which are really, really good at the tedious bookwork required by formal mathematics. That is going to go exactly the same way that introducing measurement to philosophy went for the philosophers.

Formalized mathematics is just a very small subset of what mathematicians work on; in my observation, people who work on (computer-)formalized proofs tend to sit in CS departments.


This may be changing, as we speak. Terence Tao has recently been working to formalize existing work in Lean, and has already found a bug in one of his published proofs. I think there will always(ish) be a need for mathematical proofs in the contemporary sense, but at this point I don't think it's too implausible to suggest a future where the mathematical mainstream moves towards some hybrid of mathematical exposition and computationally verified artefact.

https://mathstodon.xyz/@tao/111287749336059662
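As a tiny illustration of what machine-checked proof looks like (a toy statement, not taken from Tao's actual formalization), here is a Lean 4 sketch:

```lean
-- Toy Lean 4 example (hypothetical, not from Tao's project): the kernel
-- checks every step mechanically, so a slipped case or off-by-one in a
-- "routine" argument cannot sneak through.
theorem two_mul_eq_add (n : Nat) : 2 * n = n + n := by
  omega  -- built-in linear-arithmetic decision procedure closes the goal
```

The point is that the tedious bookwork is discharged by the machine; the human contribution is stating the theorem and steering the proof.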


I think it's likely that axiomatic and formal approaches will continue to produce interesting results and have some effect on regular mathematics.

But this is very different to suggesting that most regular mathematics will switch to using formal proofs. There's a big ergonomics gap at the moment.

An analogy: pure mathematicians look down on applied mathematicians' work, but the applied mathematicians don't care; they just get on with their own standards. You need regular mathematicians to choose to switch over en masse, so what will compel them to do that?


…success by Tao and Scholze in formalizing their work and gaining appreciable benefits such as correcting technical errors.

Which is happening right now — and the younger mathematicians who are supporting those efforts (and more broadly, things like Lean libraries) are gaining the experience while making the ergonomic changes.

That is, the person you’re replying to isn’t unaware of the historic problems: they’re pointing out that migration is starting now with early adopters like Tao and Scholze.


I think a little hesitance about the overall success of these sorts of projects is prudent, given the history of axiomatic and formal expectations versus reality.

But let's say they make huge amounts of progress - they might improve the ergonomics enough that the formal foundations of mathematics will be brought into mathematics departments as a standard, legitimate and popular subject. But I still can't see how this would affect most mathematics.


> But I still can't see how this would affect most mathematics.

It won't, in the same way that introducing physics didn't affect philosophers. They were still free to do philosophy. But most new innovation and innovators came up doing physics instead - and that's where all the growth has been for centuries.

In the end ungrounded theory is better than nothing. But grounded theory beats out ungrounded theory hands down.


The canonical paper on this was written by De Millo, et al., but it has a rather ungrammatical and incomprehensible title that does it no favors. However, the al. include Lipton and Perlis, so listen up...

Social Processes and Proofs of Theorems and Programs

https://dl.acm.org/doi/pdf/10.1145/359104.359106 [pdf]


I think the most exciting work in mathematics today is in the formal foundations. However, I can also understand mathematicians who are thinking like this:

1. I only need normal congruence

2. I only need perfect information games

Under problems that are solvable using these two assumptions, there is little benefit in tying proofs back to an axiomatic basis. Once you drop one of these two assumptions, proofs get much harder and a solid foundation gets more important.


Aren't you mixing two distinct things here?


I think you're right to identify two distinct things in the air, but it's the other guy doing the mixing. What science is and the extent to which that is approximated in practice are indeed two things. Neither is inherently more real or true than the other, or necessarily more right, depending on the context.

But I take this particular context to be about Sagan's comments on distinguishing truth from "baloney" via science by expressing principles that best express the scientific ideal. I don't take that to be "corrected" by appealing to science as practiced, which is where the mixing up of the two comes in.


I think there's a subtlety which isn't explicit in what you say. There is the ideal of science - which I think there's a case to be made is not something fixed, but constantly evolving - and the practice.

But I think there's also a separate distinction between good faith and successful enough attempts to do science, and people just gaming the system. The fact that there's plenty of the latter must be acknowledged (in fact, I think the people who discovered this current set of issues and made a big deal of it are from the same group of academics as the ones abusing this loophole, and also, it's likely to be a perennial problem), but the former also exists and is not the same as the abstract ideal you bring up.

I'm not sure I would characterize Sagan's system here as specifically best expressing the scientific ideal. I think it's a great list to think carefully about to help with clear thinking, but I'm not sure applying the label of 'science' to all these points is the right way to think about them, although I'm not sure it isn't either.


I've been wondering if falsifiability is misunderstood, based on what a few people have commented about it. The alternative version only says that if a system claims to have made specific predictions that proved true, then it should be falsifiable on that basis. This was Popper's criticism of Marxists claiming Marxism was scientific: they would constantly claim that Marxism could predict what had already happened, but it's bogus to claim this in retrospect, since they repeatedly claimed it for occurrences only after they had occurred. Is this the same as demanding every theory be falsifiable?

Sean Carroll talks pretty positively about string theory. He paints the picture that the popular view is that since it's not falsifiable (or not yet), it's somewhere between very suspicious and junk, but that actual theoretical physicists are much more positive about it.


Popper did have the idea of combating communism's claim of being a science or scientific materialism. And that's a pretty noble goal in my mind. His heart was in the right place.

I don't think Popper was going for that soft falsifiability - but if you modify his theory to do that, say it has to make some predictions, and only require that it be falsifiable on the basis of its predictions, you let in a lot of stuff that obviously isn't science.

To take the obvious example, using exactly what Popper was trying to oppose: the current Chinese Communist Party's claim that capitalism will eventually transition into socialism once a certain level of development is achieved is a prediction, and it will be falsifiable later. Clearly it is still not science.

But even if we give Popper the falsifiability angle and imagine there is some workable version of falsifiability, I still don't think his theory works; it's just not a good reflection of day-to-day science and scientists.


> To take the obvious example, using exactly what Popper was trying to oppose: the current Chinese Communist Party's claim that capitalism will eventually transition into socialism once a certain level of development is achieved is a prediction, and it will be falsifiable later. Clearly it is still not science.

I think this is the point on this issue, right? It can't count as falsifiable later, or still not science, unless they describe what they mean by "socialism" specifically enough in advance - which I think is a pretty nebulous constraint? Not sure about it. Definitely, there's a lot of (reasonable IMO) theoretical controversy about what 'welfare capitalism' has to do with proper socialist ideas, even though it's usually given the label 'socialist'.

I've also been wondering about things like what working scientists do, and what this thing is that we call science more generally - non-research-related stuff like teaching, or using science to do better engineering - and what the connection is between the two.

Is there still a place for any variation of falsifiability?

Bonus cheeky request: do you have any recommendations on modern philosophy of science?


Feyerabend is the top dog right now. I wouldn't recommend investing too much in him; he basically thinks there is no demarcation, and that voodoo and Einstein are equally valid.

I accept Quine, who I admit isn't much better; he thinks it's all about creating a coherent world view. I think he's onto something, but he's missing truth, which I think is a problem: we want to think science gives us truths about the world, or at least a way to get to truth.

There's another view that says science starts with a set of untested assumptions, which I haven't gotten around to reading much about.


I like some of what I think I understand about Feyerabend's ideas, but I didn't get on with his actual books. I expected something ethnographic-y or observation based, but only found incredibly abstract stuff.

I listened to a few interviews with Liam Kofi Bright, and his ideas about what truth is are pretty interesting. The perspective of 'giving us truths' or 'getting to the truth', is unsatisfying to me.

I think there has to be a core of some kind of 'predict what will happen when we do something, then see how close we got, tweak it in response to these observations, then repeat'.


I thought this was a good analysis of Patrick Brown's claims: https://www.youtube.com/watch?v=dXZUXQPqY3k


As far as I can tell, the issue is people misrepresenting what the review says, not that the review is faulty.

The review concludes that we have evidence that masks are effective, but the set of papers reviewed cannot be used to conclude we know the level of effectiveness enough to make accurate judgements on tradeoffs in many of the kinds of situations where people were asked to wear masks in the pandemic. So we don't know enough yet.

I read a bunch of the criticisms of this paper and criticism of the anti-maskers misrepresentation of what it said, and none of the ones I found suggested that there were missed papers which provide the precision the review didn't find.

The Guardian article is a mixed bag in my opinion. It tries to get around the idea that we really need more research, and it confuses a lot of things.

(BTW, I am not an anti-masker; I wore a mask, and I find people who whine about wearing them a bit pathetic most of the time.)

