>They're not, in general, subsets of some larger universe of values
You have a Computer.
The Computer has N bytes of memory.
You want to declare a Boolean type variable.
You tell your compiler this by coding it in the language of your choice.
The compiler will typically allocate 1 to 8 bytes of memory to store that Boolean value -- depending on such things as your CPU's word size, which compiler options are set, whether variables must be byte-, dword-, or qword-aligned in memory, etc.
>They're not, in general, subsets of some larger universe of values
But the thing is, that memory allocated for the Boolean variable is now constrained -- to be a subset of all of the previously permissible bit patterns for that memory.
It is constrained to be either 0 (representing False) or 1 (representing True).
If that Boolean value is 8 bits long (let's say) -- then the only two permissible possibilities that exist for it -- are either 00000000 (0 - False) or 00000001 (1 - True).
>They're not, in general, subsets of some larger universe of values
That type, the Boolean -- is very much a subset -- of a larger universe of values...
For a given Byte, it can also be determined whether it belongs to the Boolean Type -- via a simple function (aka "series of steps", aka algorithm).
Basically, just compare that Byte's value to 0 or 1. If it's either a 0 or a 1, then it's a member -- and if it isn't one of these values then it isn't.
That small series of steps -- is a function.
A function which determines membership of given data -- an input (in this case, a byte) -- to a type (in this case, a Boolean type).
To confirm or deny type membership of something more complex -- a more complex function/algorithm/series of steps may be needed -- but the point is, that's how it's done...
>They're not, in general, subsets of some larger universe of values
So they are all-inclusive sets of some larger universe of values?
?
If that's the case -- then why do neither computer programs nor mathematical constructs that use types ever use a single, solitary type -- one representing the larger universe of all possible values -- everywhere that a type is used?
> The Computer has N bytes of memory. You want to declare a Boolean type variable. You tell your compiler this by coding it in the language of your choice. The compiler will typically allocate 1 to 8 bytes of memory to store that Boolean value -- depending on such things as how many bits your CPU is, what compiler options are, if variables should be byte or dword or qword aligned in memory, etc.
You're confusing several things here.
- a term is not its runtime representation
- a type is not the set of all terms that inhabit it (though you can get away with pretending it is in simple cases)
- and more generally, operational semantics are not denotational semantics
> So they are all-inclusive sets of some larger universe of values?
No, they're simply not sets. Types are primitive notions in type theories, the same way that sets are the primitive notions of set theory. No reduction of one to the other is necessary or, typically, desirable.
And even if you try to build type theory on set-theoretic foundations - which is 100% the wrong choice if you want to apply it to computational problems down the line - you're still going to run into problems once things start getting recursive. Consider, for example, the type of "hyperfunctions" given by
`type Hyper a b = Hyper b a -> b`
This is too big to be a set, for the usual self-containment reasons. But it's a perfectly legitimate Haskell type, modulo syntax. I've used it in real code.
> use a single solitary type that represents the larger universe of all possible values everywhere that a type is used?
That's what a dynamically typed programming language is.
I think the problem is that the parent tries to look at quite advanced topics without understanding anything about the foundations.
So I'm not sure repeating already-stated facts will change much.
He was given already quite good sources to learn more. Now it's on him to understand those things.
(Of course, things would be simpler if he would ask questions instead of insisting on his misunderstandings.)
> > use a single solitary type that represents the larger universe of all possible values everywhere that a type is used?
LOL, the all mighty uni-type! :-D
But I see here more people that confuse mere runtime values with types…
I fear too much exposure to "dynamic" languages (or maybe also static languages with "type values") causes some severe damage to future understanding of PLT concepts and confuses people.
I think some PLT / functional programming needs to be taught early in school as part of the math education. This would likely prevent some of the damage from being exposed to imperative programming and/or dynamic languages later on.
In computer programming languages, consider Integers and consider Floats (floating point numbers, i.e., numbers with decimal points: Float, Double, Long Double, etc.).
OK, so let's suppose we define an Integer variable (in whatever language)...
int MyInt = 100;
And now, let's suppose that we want to divide it (using non-integer division!) by 3...
MyInt = MyInt / 3;
Well, now we have a bit of a problem!
You see, even though the answer should be 33.33333 (repeating) -- MyInt can only hold an Integer value!
It can only hold 33!
Some languages will permit this operation -- and the result of the operation will be 33 -- which is the wrong answer.
Some languages will prohibit this operation.
But the point is that true division -- not explicitly integer division -- is an operation.
Some operations/operators -- make sense to perform on data that is of a specific type -- and some do not!
It's perfectly OK to perform true division on a floating point type (well, ignoring division by zero, which creates problems no matter what!) and put that value back into that floating point type -- but it doesn't make sense to perform true division on an integer -- and put that value (now wrong!) back into the integer!
At least not without an explicit typecast -- which tells the compiler "I am OK performing this non-standard operation on this type -- I am OK with the side-effects..."
So that's one example.
Another example is adding an integer value -- to a string.
Another example is concatenating, or performing another string operation -- to an integer...
Basic understanding is this -- types prevent operations on data -- where it doesn't make sense to perform the operation on the given data type!
A type determines a subset -- of the set of all possible operations (which are basically functions!) -- namely, those that "make sense" to be applied to it!
So types are in fact subsets!
Subsets of possibilities, subsets of various amounts of bits and bytes, subsets of operations/functions which make sense to be permitted on those types!