> A binary logic system can only answer “yes” or “no.”
Maybe I'm missing something, but this sounds like a silly argument for ternary. A ternary system seems like it would be decidedly harder to build a computer on top of. Control flow, bit masking, and a mountain of other useful things are all predicated on boolean logic. At best it would be a waste of an extra bit (or trit), and would also introduce ambiguity and complexity at the lowest levels of the machine, where simplicity is paramount.
But again, maybe I'm missing something. I'd be super interested to read about those Soviet-era ternary systems the author mentioned.
I don't see anything fundamentally obvious about this (chip design and architecture background). If you look at chip photographs, you see massive amounts of space dedicated to wiring compared to the "logic cells" area, if you include the routing between the logic cell rows - if you want to frame it as compute vs. interconnect. Nicely regular, full-custom datapaths exist, but so do piles of standard cells. And a massive amount of space goes to storage-like functions (registers, cache, prediction, renaming, whatever). If you could 1) have logic cells that "do more" and are larger, 2) have less wiring because of denser usage of the same lines, and 3) have denser "memory" areas - well, that would be a LOT! So, not saying it's an obvious win. It's not. But it's worth considering now and then. At this level the speed of conversion between binary and multi-level also becomes critical - but it's not so slow that it obviously can't fit the need.
Speaking of compute density, do people do multi-bit standard cells these days? In their standard cell libraries?
One thing we were trying way back then was standard cells that integrated one flip-flop or latch plus some logic function into a single cell, trading a slightly larger cell (and many more distinct cells) for less wiring in the routing channel.
> Control flow, bit masking, and a mountain of other useful things are all predicated on boolean logic. At best it would be a waste of an extra bit, and would also introduce ambiguity and complexity at the lowest levels of the machine, where simplicity is paramount.
There is an even bigger mountain of useful things predicated on ternary logic waiting to be discovered. "Tritmasks" would be able to do so much more than the bitmasks we are used to, as there would be one more state to assign a meaning to. I'm not sure if the implementation complexity is something we can ever overcome, but if we did, I'm sure there would eventually be a Hacker's Delight type of book filled with useful algorithms that take advantage of ternary logic.
Yes, I am sure, that's why ternary logic is such a widely studied math field compared to boolean logic /s.
No, really: can you give an example where ternary logic is actually considerably more useful than the log(3)/log(2) factor of information density?
I don't know of any concrete examples of "tritwise tricks" (this is not my area of expertise). But since ternary logic is a superset of boolean logic, there are more possibilities available (a two-input gate sees 3×3 = 9 input combinations compared to 2×2 = 4) and some of them are bound to be useful. For example, it should be possible to represent combinations of two bitwise operations as a single equivalent tritwise operation (e.g. (x XOR A) OR B, where A and B are constants in binary, could become x "XOROR" C in ternary) – but that feels like an example constructed by someone who is still thinking in binary. I'm certain that someone much smarter than me could come up with ternary-native data types and algorithms.
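To make that concrete, here is a toy Python sketch. "XOROR", the trit encoding, and all names are invented for illustration; real ternary hardware would do this in a single gate rather than a lookup table. The point is only that the two binary constants fold into one ternary constant, because each (a, b) bit pair selects one of exactly three per-position actions:

```python
# Toy sketch: the composed binary operation (x XOR a) OR b, applied
# per position with constant bits a and b, collapses to one "tritwise"
# operation, since each (a, b) pair picks one of three actions:
#   b=1        -> output is always 1   (trit 2: "set")
#   b=0, a=1   -> output is NOT x      (trit 1: "invert")
#   b=0, a=0   -> output is x          (trit 0: "keep")

def pack_trit_constant(a_bits, b_bits):
    """Fold the two binary constants into one list of trits."""
    return [2 if b else (1 if a else 0) for a, b in zip(a_bits, b_bits)]

def xoror(x_bits, c_trits):
    """Apply the fused operation: one table lookup per position."""
    table = {0: lambda x: x, 1: lambda x: 1 - x, 2: lambda x: 1}
    return [table[c](x) for x, c in zip(x_bits, c_trits)]

x = [1, 0, 1, 1]
a = [0, 1, 1, 0]   # XOR constant
b = [1, 0, 0, 0]   # OR constant
c = pack_trit_constant(a, b)

# The fused tritwise version matches the two-step binary version.
assert xoror(x, c) == [(xi ^ ai) | bi for xi, ai, bi in zip(x, a, b)]
print(xoror(x, c))  # -> [1, 1, 0, 1]
```

Two binary instructions become one, at the cost of a richer per-position alphabet – which is exactly the trade ternary advocates are pointing at.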
If ternary logic has not been widely studied I assume there is a lot to be discovered still.
Boolean logic is somewhat unintuitive already; I mean, we have whole college courses about it.
> At best it would be a waste of an extra bit (or trit), and would also introduce ambiguity and complexity at the lowest levels of the machine, where simplicity is paramount.
This seems backwards to me. It isn’t a “waste” of a bit, because it doesn’t use bits, it is the addition of a third state. It isn’t ambiguous, it is just a new convention. If you look at it through the lens of binary computing it seems more confusing than if you start from scratch, I think.
Doesn't this have more to do with the fact that it's not part of the standard math curriculum taught at the high school level? I'm no math whiz, and discrete math was basically a free A when I took it in college. The most difficult part for me was memorizing the Latin (modus ponens, modus tollens - both of which I still had to look up because I forgot them beyond mp, mt).
Being a college course doesn't imply that it's hard, just that it's requisite knowledge that a student is not expected to have upon entering university.
I would think there wouldn't be much of a difference because the smallest unit you can really work with on modern computers is the byte. And whether you use 8 bits to encode a byte (with 256 possible values) or 5 trits (with 243 possible values), shouldn't really matter?
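The figures are easy to check in a few lines of Python (a quick sketch, nothing more). One nitpick the numbers reveal: 243 < 256, so strictly you need 6 trits to cover every byte value; conversion between the two is plain base arithmetic:

```python
# Quick check of the figures above, plus base conversion.
# 8 bits give 256 values; 5 trits give 3**5 = 243 -- just short of a
# byte, so covering every byte value strictly needs 6 trits (729).

from math import ceil, log

def to_trits(n, width):
    """Little-endian base-3 digits of a non-negative integer."""
    trits = []
    for _ in range(width):
        n, t = divmod(n, 3)
        trits.append(t)
    return trits

def from_trits(trits):
    """Reassemble the integer from its base-3 digits."""
    return sum(t * 3 ** i for i, t in enumerate(trits))

print(2 ** 8, 3 ** 5, 3 ** 6)     # -> 256 243 729
print(ceil(8 * log(2) / log(3)))  # -> 6 (trits to hold any byte)

# Round-trip a byte value through its trit representation.
assert from_trits(to_trits(200, 6)) == 200
```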
3 fewer lanes for the same computation. FWIW 8bits is the addressable unit. Computers work with 64bits today, they actually mask off computation to work with 8bits. A ternary computer equivalent would have 31trits (the difference is exponential - many more bits only adds a few trits). That means 31 conductors for the signal and 31 adders in the alu rather than 64. The whole cpu could be smaller with everything packed closer together enabling lower power and faster clock rates in general. Of course ternary computers have more states and the voltage differences between highest and lowest has to be higher to allow differentiation and then this causes more leakage which is terrible. But the actual bits vs trits itself really does matter.
> A ternary computer equivalent would have 31trits
I think you mean 41, not 31 (3^31 is about a factor of 30000 away from 2^64).
The difference in the number of trits/bits is not exponential, it's linear with a factor of log(2)/log(3) (so about 0.63 trits for every bit, or conversely 1.58 bits for every trit).
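If you want to sanity-check that, a few lines of Python (just a sketch) confirm the linear relation and the 41-trit figure:

```python
# Trits needed to cover n bits: linear, about 0.631 trits per bit.
from math import ceil, log

def trits_for_bits(n):
    """Smallest trit count whose range covers 2**n values."""
    return ceil(n * log(2) / log(3))

print(trits_for_bits(64))  # -> 41
assert 3 ** 41 > 2 ** 64 > 3 ** 40  # 41 trits suffice, 40 do not
```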
> ternary computers have more states and the voltage differences between highest and lowest has to be higher to allow differentiation and then this causes more leakage which is terrible
Yes -- everything else being equal, with 3 states you'd need double the voltage between the lowest and highest states when compared to 2 states.
Also, we've spent the last 50 years or so optimizing the fabrication of MOSFETs (and their derivatives) for 2 states. Adding the constraint of a separate stable state *between* two voltage levels is another ball game entirely, especially at GHz frequencies.