How is this a counterargument for anything? A more permissive license is not inherently a bad thing. Many C and C++ projects are also abandoned or in so-so condition, so singling out Rust makes little sense. Either sudo-rs fills the void or it doesn't, but its existence is a counterpoint to the idea that open source projects have no path of evolution. Just because that path doesn't look the way you want it to doesn't mean it doesn't exist.
Yeah, trust is real important. Wait, what's that? Stop the count? Don't count all the votes because it's taking too long? Where have I heard that before... What political, totally not fascist, group of people have supported a politician saying that before...
Because what it really means is "we directed the AI, it translated our ideas into Lean, the Lean tool then acted as an oracle determining whether anything was incorrect, doing literally all the hard work of correctness, and this process looped until the prompter gave up or Lean sent back an all clear."
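For concreteness, here's roughly what "Lean as the oracle" looks like. The theorem is a toy stand-in, but the kernel's accept/reject behavior is the point:

    -- A candidate the kernel accepts (Nat.add_comm is in Lean core):
    theorem add_comm' (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- A wrong candidate is rejected with a type error, no human
    -- judgment needed:
    -- theorem bad (a b : Nat) : a + b = a := Nat.add_comm a b

The loop is just: generate, submit, read the error, retry.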
How is it not trivially solved by a discussion section? Why is your solution better for someone else's workflow? Why do you feel like you get to impose your way of doing work on an open source project?
Why do you feel like it's ok to make up nonsense about imposing? How can I impose anything on that project? Why break the expected/established workflow of users if the explanation doesn't work? Why are you asking 3 questions without answering 1?
Each project has its own workflow. No established workflow is broken. GitHub traditionally imposed a different workflow because initially it didn't even have discussions.
Perceiving corpos as "simple profit seeking entities" is some of the most naive Milton Friedman crap. Corporations operate as an amalgamation of the desires of a group of powerful enough influencers, to which your rank-and-file investor is NOT making a meaningful contribution. Milton Friedman has done more harm to capitalism than Marx has done to socialism.
Friedman abstracted away feedback loops between corpos and the social/regulatory environment they operate in. We agree that that is beyond fucking useless.
I didn’t make that same naive assumption when describing corpos as simple profit seeking entities, you just misunderstood what I was saying.
Dafny has been around for a while and people do in fact use it. People also apply contract languages to C and all manner of other things, so really the question boils down to "Why aren't you doing what I expect of you?"
Yes... yes... sure, of course... You neglect this one little detail: theorem proving IS programming. So if an AI can be "better than a Fields medalist" (a laughable claim, akin to basically calling it AGI) then it will be better than all software engineers everywhere.
But see, you neglect something important: it's the programmer that is establishing the rules of the game, and as Grothendieck taught us already, often just setting up the game is ALL of the work, and the proof is trivial.
What is harder, beating Lee Sedol at Go, or physically placing stones on a Go board? Which is closer to AGI?
Because AlphaGo can only do one.
AI could very well be better at formal theorem proving than Fields medalists pretty soon. It will not have taste, the ability to see the beauty in math, or the judgment to pick problems and set directions for the field. But given a well-specified problem, it can brute-force search through Lean tactic space at an extremely superhuman pace. What it lacks in intuition and brilliance, it will make up for by being able to explore in parallel.
There is a quality/quantity tradeoff in search with a verifier. A superhuman artificial theorem prover can be generating much worse ideas on average than a top mathematician, and make up for it by trying many more of them.
It's Kasparov vs Deep Blue and Sedol vs AlphaGo all over again.
It's also nowhere near AGI. Embodiment and the real world is super messy. See Moravec's paradox.
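A toy sketch of that quality/quantity tradeoff (the generator, hit rate, and budget are all invented for illustration; real systems search over tactics, not strings):

    import random

    def verifier(candidate: str) -> bool:
        # stands in for the Lean kernel: sound, mechanical, zero taste
        return candidate == "correct proof"

    def weak_generator() -> str:
        # a terrible hit rate per sample (~0.1%)...
        return random.choice(["junk"] * 999 + ["correct proof"])

    def search(budget: int):
        # ...made up for by sheer volume (and, in real systems,
        # by running many of these loops in parallel)
        for _ in range(budget):
            candidate = weak_generator()
            if verifier(candidate):
                return candidate
        return None

    print(search(budget=100_000))

At a 0.1% hit rate per sample, 100,000 tries fail only with probability 0.999^100000, i.e. roughly 4e-44: a much worse generator than a top mathematician, saved entirely by volume plus a sound checker.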
Practical programs deal with the outside world; they are underspecified, and their utility depends on the changing whims of people. The formal specification of a math problem is self-contained.
> But given a well-specified problem, it can brute-force search through Lean tactic space at an extremely superhuman pace. What it lacks in intuition and brilliance, it will make up for by being able to explore in parallel.
Right, but because math is not yet well specified and formalized, that could be a problem, and that's where humans with intuition and more rigid reasoning are still necessary.
Your analogy completely misses the point. What is harder? Designing a game that has relevant implications in physical reality, or playing a game already designed given an oracle that tells you when you make an incorrect move?
I'm not sure if you're just choosing intentionally obtuse verbiage or if you're actually saying something completely incoherent.
"Overall Complexity" is a meaningless term without you defining it. I suppose you mean that there is some lower bound limit to the amount of work relative to a model of computation, but this is a trivial statement, because the limit is always at least no work.
We have many problems where we don't know the best upper bound, so even an interesting formulation of your idea is not necessarily true: work need not be conserved at that bound, because the bound may be an infimum that is never attained, while an epsilon improvement is always possible.
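Matrix multiplication is the standard illustration here, using the usual definition of the exponent ω:

    \omega = \inf\{\, c : \text{matrix multiplication is computable in } O(n^{c}) \,\}

Every known algorithm gives an upper bound on ω, but nothing guarantees the infimum is attained: there could be an O(n^{ω+ε}) algorithm for every ε > 0 and yet no O(n^{ω}) algorithm at all.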
Finally, algorithmic complexity is the wrong analogy for reverse mathematics anyway.
To give an example, we consider binary search to require less work than a linear search. But there are costs and use-case considerations involved. Insertion of a new record needs to use binary search to keep the data sorted. Also, if the number of lookups is far smaller than the number of records, the overall cost is higher than appending and doing a linear search. That's what I mean by moving the complexity around (sketched below).
A problem scenario doesn't have absolute characteristics. It's relative to your way of looking at it, and your definition of a problem.
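A back-of-the-envelope cost model of exactly that tradeoff (the unit costs are invented for illustration, and a flat sorted array is assumed):

    import math

    def sorted_costs(n_inserts: int, n_lookups: int) -> float:
        # each insert: binary search O(log i) plus shifting ~i/2 slots
        insert_work = sum(math.log2(i + 1) + i / 2
                          for i in range(1, n_inserts + 1))
        lookup_work = n_lookups * math.log2(n_inserts + 1)
        return insert_work + lookup_work

    def unsorted_costs(n_inserts: int, n_lookups: int) -> float:
        # each insert: O(1) append; each lookup: scan ~half the list
        return n_inserts + n_lookups * n_inserts / 2

    n = 10_000
    for n_lookups in (10, 1_000, 100_000):
        sorted_wins = sorted_costs(n, n_lookups) < unsorted_costs(n, n_lookups)
        print(n_lookups, "sorted wins" if sorted_wins else "append+scan wins")

With few lookups per record, append-and-scan is cheaper overall; crank the lookup count up and keeping the data sorted pays for itself.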
You are right, but this doesn't mean that the amount of work is conserved as your original message implies. The correct statement would be that "algorithmic complexity is just one aspect of actual practical complexity and an algorithm with better algorithmic complexity can end up performing worse in reality due to practical considerations of the data and the processor doing the computations".