In addition to what the sibling commenter said, niri exposes everything over an IPC interface accessible via its CLI (or a socket), so if you wanted, say, a keybind to switch to your terminal, no matter where it is, you could bind it to this:
Nice! I have a jq/ipc based abomination of a run-or-raise equivalent that works spectacularly well, including cycling windows in order. I keep saying I'll stick it in a repo.
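The cycling part of a run-or-raise script can be sketched like this; a minimal Python sketch against mock data, since I don't have the actual jq script. The `id`/`app_id` field names and the idea of parsing `niri msg --json windows` output are assumptions for illustration:

```python
# Hypothetical run-or-raise cycling logic. In a real script the `windows` list
# would come from parsing the compositor's JSON window list; here it's mocked.
def pick_next_window(windows, app_id, focused_id):
    """Return the id of the next matching window, cycling past the focused one."""
    matches = sorted(w["id"] for w in windows if w["app_id"] == app_id)
    if not matches:
        return None  # nothing to raise; a real script would launch the app here
    after = [i for i in matches if i > focused_id]
    return after[0] if after else matches[0]  # wrap around to the first match

windows = [
    {"id": 1, "app_id": "foot"},
    {"id": 3, "app_id": "firefox"},
    {"id": 5, "app_id": "foot"},
]
print(pick_next_window(windows, "foot", 1))  # cycles 1 -> 5
print(pick_next_window(windows, "foot", 5))  # wraps 5 -> 1
```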
I don't think it's particularly useful to think of unit as an empty tuple specifically; that's just an arbitrary but convenient definition for it.
Really, a unit type is just one that contains only a single value. This is a unit in the same way that 1 is a unit for the integers. With some hand waving it is an identity for product types; for example, (int, ()) is the "same" (isomorphic, yada yada) as int.
Mildly disagree with your first statement. Well, I mostly agree that it's not particularly helpful for newcomers.
As a 0-tuple, it becomes a specific case of a more general concept -- there is some beauty/usefulness in not having to have a "special" construct for "Unit", which is (in a sense) not just "any" unit type.
It also "justifies" the syntax of `()` and notes that it is a product type, all the while fitting into the idea of the "cardinality" of `(a1, a2, ..., an)` being the product of the cardinalities of each of its type params.
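The cardinality point can be made concrete in Python, where `()` happens to be the one and only value of the empty-tuple shape (an illustrative sketch, not a claim about Python's type system being a real product-type calculus):

```python
from itertools import product

# Cardinality of a product type is the product of the cardinalities of its parts.
bools = [False, True]  # cardinality 2
units = [()]           # the 0-tuple type has exactly one inhabitant: ()
pairs = list(product(bools, units))  # (bool, ()) has 2 * 1 = 2 inhabitants
print(len(pairs))  # 2 -- same cardinality as bool, witnessing unit as identity

# The isomorphism (a, ()) ~ a, spelled out as a round trip:
to_a = lambda p: p[0]
from_a = lambda a: (a, ())
assert all(from_a(to_a(p)) == p for p in pairs)
```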
That page is decidedly worth reading and re-reading many times in the future...
I think it boils down to the following:
>"Curry–Howard correspondence [...] is the direct relationship between computer programs and mathematical proofs."
And:
>"If one abstracts on the peculiarities of either formalism, the following generalization arises: a proof is a program, and the formula it proves is the type for the program."
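A tiny illustration of that slogan in Python's type-hint notation (hints aren't checked at runtime, so this is only a sketch of the correspondence, not a proof checker):

```python
from typing import TypeVar, Tuple, Callable

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Under Curry-Howard, the type Tuple[A, B] -> A reads as the proposition
# "A and B implies A", and this function is a proof of it.
def and_elim_left(p: Tuple[A, B]) -> A:
    return p[0]

# Callable[[A], B] reads as "A implies B"; composition then "proves"
# that (A -> B) and (B -> C) together imply (A -> C).
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))

print(and_elim_left((1, "x")))            # 1
print(compose(lambda n: n + 1, str)(41))  # "42"
```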
In fact, I'm going to go for "full crackpot" here...
If all computer programs are algorithms, and all mathematical proofs are algorithms, and all types are algorithms -- then a "grand unifying theory" between Computer Programs and Mathematics -- looks like this:
It's all Algorithms.
Algorithms on top of algorithms.
You know, like turtles on top of turtles...
This makes sense too, if we think about it...
Algorithms are just series of steps, with given inputs and given outputs.
That is no different than Computer Programs.
And that is no different than Math...
You can call this series of steps an Algorithm, you can call it a Function, you can call it y=f(x), you can call it a Type, you can call it a Computer Program, you can call it a Math Equation, you can call it a logical proposition, and you can use whatever notation and nomenclature you like -- but in the end, in the end, it all boils down to an Algorithm...
A series of steps...
Now, perhaps that series of steps -- uses other series of steps -- perhaps that Algorithm uses other Algorithms, perhaps that function uses other functions, perhaps that Computer Program uses other computer programs, perhaps that Math Equation uses other math equations, etc., etc. --
But in the end, in the end...
It's all a series of rigorously defined steps...
It's all an Algorithm...
Or Algorithm consisting of other Algorithms... (recursively...)
Patterns of steps -- inside of patterns of other steps (again, recursively...)
Anyway, great page, definitely worth reading and re-reading!
Officially you're supposed to download the book and then edit the exercise solutions into it with Emacs (or VSCode, I think), as you can then run the exercises to see if they type-check (i.e., if they're correct!). However, there's also a not-necessarily-up-to-date interactive in-browser version:
I haven't used Idris, so I'd say it's quite possible that working through the Idris book is also just as fun and relevant for understanding the implications of the Curry-Howard correspondence.
They're not. Only constructive proofs have corresponding programs (namely, the program that actually carries out the relevant construction). You can embed nonconstructive proofs indirectly via double-negation translation (computational counterpart: continuation-passing style), but equivalence of classical formulas isn't preserved.
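The computational side of that translation is easy to see in miniature. A value of type A becomes a function of type ((A -> R) -> R), i.e. something that feeds A to any continuation; an illustrative Python sketch, not tied to any particular proof assistant:

```python
# The double-negation embedding of a value, computationally: wrap it so it
# hands itself to whatever continuation asks for it.
def embed(a):
    """A  ->  ((A -> R) -> R): the CPS / double-negation view of a value."""
    return lambda k: k(a)

# Direct style:
direct = 3 + 4

# Continuation-passing style: each step passes its result to the next step
# instead of returning it.
def add_cps(x, y):
    return lambda k: k(x + y)

result = add_cps(3, 4)(lambda r: r)  # run with the identity continuation
print(direct, result)                # 7 7
assert embed(42)(lambda x: x) == 42
```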
> all types are algorithms
Definitely not the case. Types correspond to propositions, not their proofs.
All mathematical proofs consist of a series of steps.
>all types are algorithms
All types can be expressed as a series of steps -- with a given input -- and an output of True or False following those steps.
True if the given input is a member of that Type.
False if a given input is not a member of that Type.
If the definition of an 'Algorithm' -- is 'a series of steps', then both mathematical proofs and types -- must be Algorithms...
If we have any debate -- then we are debating the semantics of "what constitutes a step" -- what rigorously defines it -- and what steps may be permitted when...
It may very well be that the Lambda Calculus is the best definition of what constitutes these steps -- but it may very well be that there is a different/better paradigm for looking at them (I don't know myself -- I am trying to determine this)...
Here, you might like the following video (graciously submitted by rbonvall!) for an overview of some of the different possible paradigms that these steps -- might be considered in:
> All types can be expressed as a series of steps -- with a given input -- and an output of True or False following those steps.
No, this is a fundamental misunderstanding of what types are. They're not, in general, subsets of some larger universe of values, and you can't have terms without types attached. (Of course there are "gradually typed" programming languages, but these are really languages with a top type a la `object`, subtyping, and generally a healthy dose of unsoundness).
>They're not, in general, subsets of some larger universe of values
You have a Computer.
The Computer has N bytes of memory.
You want to declare a Boolean type variable.
You tell your compiler this by coding it in the language of your choice.
The compiler will typically allocate 1 to 8 bytes of memory to store that Boolean value -- depending on such things as how many bits your CPU is, what compiler options are, if variables should be byte or dword or qword aligned in memory, etc.
>They're not, in general, subsets of some larger universe of values
But the thing is, that memory allocated for the Boolean variable is now constrained -- to be a subset of all of the previously permissible bit patterns for that memory.
It is constrained to be either 0 (representing False) or 1, (representing True).
If that Boolean value is 8 bits long (let's say) -- then the only two permissible possibilities that exist for it -- are either 00000000 (0 - False) or 00000001 (1 - True).
>They're not, in general, subsets of some larger universe of values
That type, the Boolean -- is very much a subset -- of a larger universe of values...
For a given Byte it can also be determined if it belongs to the Boolean Type
-- via a simple function (aka "series of steps", aka algorithm).
Basically, just compare that Byte's value to 0 or 1. If it's either a 0 or a 1, then it's a member -- and if it isn't one of these values then it isn't.
That small series of steps -- is a function.
A function which determines membership of given data -- an input (in this case, a byte) -- to a type (in this case, a Boolean type).
To validate or repudiate type membership of something more complex -- a more complex function/algorithm/series of steps -- may be needed -- but the point is, that's how it's done...
>They're not, in general, subsets of some larger universe of values
So they are all-inclusive sets of some larger universe of values?
?
If that's the case -- then why do neither computer programs nor mathematical constructs that use types -- use a single solitary type that represents the larger universe of all possible values everywhere that a type is used?
> The Computer has N bytes of memory. You want to declare a Boolean type variable. You tell your compiler this by coding it in the language of your choice. The compiler will typically allocate 1 to 8 bytes of memory to store that Boolean value -- depending on such things as how many bits your CPU is, what compiler options are, if variables should be byte or dword or qword aligned in memory, etc.
You're confusing several things here.
- a term is not its runtime representation
- a type is not the set of all terms that inhabit it (though you can get away with pretending it is in simple cases)
- and more generally, operational semantics are not denotational semantics
> So they are all-inclusive sets of some larger universe of values?
No, they're simply not sets. Types are primitive notions in type theories, the same way that sets are the primitive notions of set theory. No reduction of one to the other is necessary or, typically, desirable.
And even if you try to build type theory on set-theoretic foundations - which is 100% the wrong choice if you want to apply it to computational problems down the line - you're still going to run into problems once things start getting recursive. Consider, for example, the type of "hyperfunctions" given by
`type Hyper a b = Hyper b a -> b`
This is too big to be a set, for the usual self-containment reasons. But it's a perfectly legitimate Haskell type, modulo syntax. I've used it in real code.
> use a single solitary type that represents the larger universe of all possible values everywhere that a type is used?
That's what a dynamically typed programming language is.
I think the problem is that the parent tries to look at quite advanced topics without understanding anything about the foundations.
So I'm not sure repeating already-stated facts will change much.
He was already given quite good sources to learn more. Now it's on him to understand those things.
(Of course, things would be simpler if he would ask questions instead of insisting on his misunderstandings.)
> > use a single solitary type that represents the larger universe of all possible values everywhere that a type is used?
LOL, the all mighty uni-type! :-D
But I see here more people that confuse mere runtime values with types…
I fear too much exposure to "dynamic" languages (or maybe also static languages with "type values") causes some severe damage to future understanding of PLT concepts and confuses people.
I think some PLT / functional programming needs to be taught early in school as part of the math education. This would likely prevent some of the damage from being exposed to imperative programming and/or dynamic languages later on.
In computer programming languages, consider Integers and consider Floats (Floating point numbers, i.e., numbers with decimal points, i.e., Float, Double, Long Double, etc).
OK, so let's suppose we define an Integer variable (in whatever language)...
int MyInt = 100;
And now, let's suppose that we want to divide it (using non-integer division!) by 3...
MyInt = MyInt / 3;
Well, now we have a bit of a problem!
You see, even though the answer should be 33.33333 (repeating) -- MyInt can only hold an Integer value!
It can only hold 33!
Some languages will permit this operation -- and the result of the operation will be 33 -- which is the wrong answer.
Some languages will prohibit this operation.
But the point is that true division -- not explicit integer division -- is an operation.
Some operations/operators -- make sense to perform on data that is of a specific type -- and some do not!
It's perfectly OK to perform true division on a floating point type (well, ignoring division by zero, which creates problems no matter what!) and put that value back into that floating point type -- but it doesn't make sense to perform true division on an integer -- and put that value (now wrong!) back into the integer!
At least not without an explicit typecast -- which tells the compiler "I am OK performing this non-standard operation on this type -- I am OK with the side-effects..."
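Python happens to demonstrate both behaviors side by side, since it spells true division and integer division differently:

```python
# "True" division always yields a float; // is explicit integer (floor)
# division that discards the fractional part.
my_int = 100
print(my_int / 3)   # 33.333... (a float -- the mathematically "right" answer)
print(my_int // 3)  # 33        (explicit integer division, fraction discarded)

# The "explicit typecast" version: truncate deliberately, back into an int.
truncated = int(my_int / 3)
print(truncated)    # 33
```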
So that's one example.
Another example is adding an integer value -- to a string.
Another example is concatenating, or performing another string operation -- to an integer...
Basic understanding is this -- types prevent operations on data -- where it doesn't make sense to perform the operation on the given data type!
A type determines a subset -- of the set of all possible operations (which are basically functions!) that "make sense" to be applied to it!
So types are in fact subsets!
Subsets of possibilities, subsets of various amounts of bits and bytes, subsets of operations/functions which make sense to be permitted on those types!
"Programs as proofs" is only a thing in the context of mathematically pure languages.
Almost all programming languages aren't pure.
That's, on the other hand, why proof assistants are very unusable as programming languages; you usually can't do anything with them besides proving stuff. Running actually "useful" code is mostly not possible. Things like Haskell or Idris try to bridge both worlds, but this isn't straightforward. How to actually do anything in a pure programming language is still an open question. Monads are some kludge, but not very practical for most people…
So to summarize: "Normal" programs don't correspond to proofs in any way!
"Evaluation corresponds exactly to simplification of proofs..."
(The rest of the video, both before and after this statement, contain more context...)
That being said, I agree with you that languages which are used primarily for theorem proving (AKA, "proof assistants") -- are usually not as applicable to as broad a range of programming paradigms and problems as most general purpose computer programming languages are...
You might wish to re-watch the linked talk as you just restated your confusion (which was already the cause of a lot of down-votes in this thread).
Wadler says there:
"Evaluation [of simply typed lambda calculus!] corresponds exactly to simplification of proofs…"
If you didn't get that context you actually missed the whole talk.
You really need to understand: "Programs as proofs" is only a thing in languages with strongly normalizing type systems. This implies that the language is pure and does not contain general recursion.
In a language with mutation (which is obviously not pure) you can destroy any "proof" by just writing to a variable, e.g. switching a single bit in the case of a Boolean value.
Terms can obviously only be proofs if it's not possible to change a term ("proof") from true to false (or the other way around) at will! Once some term is determined as having some type, it may not change any more. This obviously rules out any language with mutation or I/O.
That's why you can't write applications in Agda, or mathematical proofs in Haskell. (In the general case; you could do both with "tricks", but those are indeed tricks.)
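The general-recursion point can be made concrete: in a language that allows unrestricted recursion, a looping definition "inhabits" any type the checker will accept, so type-checking alone proves nothing. An illustrative sketch (Python cuts the loop off with a RecursionError rather than spinning forever):

```python
# A definition annotated as returning an int that never actually returns one.
# A total language would reject this; Python's "type system" happily accepts it,
# which is exactly why such languages can't serve as proof systems.
def bogus_proof_of_anything() -> int:
    return bogus_proof_of_anything()  # general recursion: no value is ever produced

try:
    bogus_proof_of_anything()
except RecursionError:
    print("no value was ever produced")
```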
>"Programs as proofs" is only a thing in the context of mathematically pure languages.
>Almost all programming languages aren't pure.
Yes, but any Turing-Complete language -- is Turing-Complete...
Challenge: Show me a Mathematical Algorithm -- that can be expressed in Math, that cannot be expressed using symbols and symbol manipulation on a Turing Machine -- that is, on any plain, regular computer...
Hint: Every computer that Mathematica runs on -- is a Turing Machine...
Extra Hint: Mathematica can express, manipulate (and typically solve!) -- any expression in Mathematics...
Extra Extra Hint: Programming Languages need not be "pure" -- to be Turing-Complete.
Your question is trivial anyway: An algorithm is something that can be performed by a (Turing-complete) machine.
Therefore there exists no algorithm that can't be computed (on a Turing machine). That's by definition!
But this has absolutely nothing to do with which kinds of languages can be used to prove anything in math.
To prove something you need algorithms that are guaranteed to produce results. Your machine must halt to spit out a result!
But as everybody knows, there is no such guarantee for arbitrary algorithms. The question of whether some arbitrary algorithm halts is undecidable.
Therefore Turing-complete languages are "too powerful" to be used to prove things, because you can't know whether a "proof" in such a language can be computed at all.
Or to formulate it differently: A computer can't compute uncomputable numbers, or decide undecidable problems.
But math can—of course—express uncomputable numbers. (I hope you're able to google some definitions of such number on your own…)
And just a reminder: This site is not the right place to learn basics.
Also your tone is getting unacceptable.
That said, please keep in mind that the internet does not forget… Your childish behavior will be remembered until the end of time. (In case you've forgot, you're posting here under your RL name, boy.)
>An algorithm is something that can be performed by a (Turing-complete) machine.
We agree so far...
>Therefore there exists no algorithm that can't be computed (on a Turing machine). That's by definition!
We agree so far...
>But this has absolutely nothing to do with which kinds of languages can be used to prove anything in math.
Here we disagree!
It has everything to do with which kinds of languages can prove anything in math!
If a language is Turing-complete, it can run any algorithm.
If a language can run any algorithm, it can be programmed to perform symbolic manipulations of Math equations that are expressed in symbolic form.
Basically, it can perform Mathematics.
This is what Mathematica is and does.
Mathematica could be programmed -- in any Turing-complete programming language.
If Mathematica could be programmed in any Turing-complete programming language, and Mathematica can be used to solve any Mathematical problem, then any Turing-complete programming language could be used to program what Mathematica does -- which is Mathematics, basically.
Which includes Mathematical proofs, incidentally.
>Your machine must halt to spit out a result!
This is a contradiction. Functions (and Programs) -- do not need to halt to spit out a result.
>The question whether some arbitrary algorithm halts is undecidable.
Algorithms (and programs and functions!) -- can be tested for halting by actually running them!
If they halt, they halt (99.9999% of them do not -- unless they are coded wrong!)...
>Or to formulate it differently: A computer can't compute uncomputable numbers
>or decide undecidable problems.
Yes -- but this reframes all mathematical proofs as being uncomputable and/or undecidable.
Challenge: Show me a mathematical proof which is either uncomputable and/or undecidable.
>But math can—of course—express uncomputable numbers. (I hope you're able to google some definitions of such number on your own…)
Computers can express uncomputable numbers (and any other concept in Mathematics) symbolically.
Computers can express proofs (and other operations in Mathematics) via symbol manipulation.
This is what Mathematica does.
Variables, after all, are symbols.
They can be symbols of things in the real world and/or they can be symbols of ideas...
But any symbol if defined in a Turing-machine, by whatever method -- can be symbolically manipulated in that Turing-machine.
Any language which is Turing-complete -- has the ability to manipulate symbols in this way -- if properly programmed -- like Mathematica does...
Conclusion: All Turing-complete programming languages -- have the ability to express Mathematical proofs, like Mathematica does, if properly programmed to do so...
>Also your tone is getting unacceptable.
To who, exactly?
?
Perhaps pure logic is interpreted as "tone" -- but the error of that particular type of interpretation -- is not on my side of the fence! <g>
>That said, please keep in mind that the internet does not forget… Your childish behavior will be remembered until the end of time.
I hope it does! <g>
The Internet will remember me (for a long time!) -- for my dedication to self-evident truths, first principles, logic, reason, clear thinking and simple explanations...<g>
The Internet, on the other hand, tends to forget people who endlessly confuse, distract, complain, propagandize, derail, make mountains out of molehills (and molehills out of mountains!), speak with "forked tongues" and engage in Selective Abstraction, Arbitrary Inference, Equivocation, Prevarication, Duplicity, Straw Man arguments, Dichotomous Reasoning -- or one/some/all of the above!
I'm not saying that that's you...
I'm just saying that the Internet tends to forget such people... <g>
You know, I guess it's their "right to be forgotten" -- for one or more such activities! <g>
>(In case you've forgot, you're posting here under your RL name, boy.)
<g>
Well, we know for a fact that I am neither:
a) A GPT-3 or other bot...
b) A paid disinformant and/or Troll...
c) Someone with such a large degree of narcissism and/or agenda -- that they feel the necessity to continuously railroad other posters to their point of view (remember, you engaged me in conversation first -- I did not engage you!)
d) One/Some/All of the above...
>And just a reminder: This site is not the right place to learn basics.
No site on the Internet is the right place to make illogical arguments to logical people...<g>
5-15% is in line with, for example, anecdotally observed view-count-to-vote ratios on YouTube videos.
So based on that, I am not surprised that video games on Steam can be expected to have a similar number of reviews relative to the number of owners.
Typically I expect a video on YouTube to have about 10% votes and about 1% comments.
Do all reviews on Steam need to include text or can simple thumbs up/down be given without any text? Haven't used Steam in a while so I don't remember.
Yes, you have to write text for the review to count. I just looked at the reviews for this game and there are actually 30 reviews, but apparently not every review counts toward the stat line on the main page (not exactly sure which do).