Yeah, it's a case of "don't let the perfect be the enemy of the good". The conservative stance is happy with the status quo; the progressive stance isn't. We probably need a bit of both, and finding the right balance is key.
Maybe the difference between the eval of the best move vs the next one(s)? An "only move" situation would be more risky than when you have a choice between many good moves.
That's it exactly. Engines will often show you at least three lines, each with its evaluation, and you can often gauge the difficulty just from that delta between the 1st and 2nd best moves. With some practical chess experience you can also "feel" how natural or esoteric the best move is.
In the WCC match between Caruana and Carlsen, there was one difficult endgame where Carlsen (the champion) made a move that engines flagged as a "blunder", because there was a theoretical checkmate in something like 36(!) moves. But no commentator took it seriously, since there was "no way" a human could spot the chance and calculate it correctly under the clock.
Not necessarily. If that "only move" is obvious, then it's not really risky. Like if a queen trade is offered and the opponent accepts, then typically the "only move" that doesn't massively lose is to capture back. But that's extremely obvious, and doesn't represent a sharp or complex position.
The right book for the right problem. SICP isn't meant to teach you how to tackle fault-tolerance in a complex distributed system. Here is a textbook that talks about distributed systems (van Steen and Tanenbaum):
Not the OP but I have tried to reduce my sugar intake, I'm walking more than before, and I still basically gain half a pound every year. I'd lose some weight for a few days, and gain it all back on the one day I'm a bad boy. It seems like there's an internal dial that decides what my weight is supposed to be, no matter how much I fight it. And the dial adds half a pound every year. I guess the dial is my metabolism as I age.
"To ascribe beliefs, free will, intentions, consciousness, abilities, or
wants to a machine is legitimate when such an ascription expresses
the same information about the machine that it expresses about a
person. It is useful when the ascription helps us understand the
structure of the machine, its past or future behaviour, or how to repair
or improve it. It is perhaps never logically required even for humans,
but expressing reasonably briefly what is actually known about the
state of the machine in a particular situation may require mental
qualities or qualities isomorphic to them. Theories of belief, knowledge
and wanting can be constructed for machines in a simpler setting than
for humans, and later applied to humans. Ascription of mental qualities
is most straightforward for machines of known structure such as
thermostats and computer operating systems, but is most useful when
applied to entities whose structure is incompletely known.” (John McCarthy, 1979) https://www-formal.stanford.edu/jmc/ascribing.pdf
Ascribing mental qualities to machines poses several challenges. Ethically, it blurs the line between human and machine, raising questions about rights and responsibilities. Philosophically, it complicates the understanding of mind by attributing human-like qualities to non-human entities. Practically, it can lead to misunderstandings about the capabilities and limitations of machines, as they do not truly possess beliefs or intentions like humans do. Additionally, this practice can result in a misuse of language, potentially misleading people about the nature of artificial intelligence.
The problem is that more and more of these little mini watch apps are adding functionality and touchable areas, to the point that on my older 41mm (I think? maybe 40mm?) watch it's hard to hit some of the touch targets.
I'm guessing most devs use the giant size watches.
Even Apple themselves are guilty of ridiculously small touch targets these days. Hold the side button until the emergency screen comes up. I defy you to hit the “power off” button on a 40mm watch on the first try. Hell, I have a hard enough time on my AW Ultra with these old eyes.
Agreed: it's worse when your display has a hardware failure, you try to turn it off from memory, and you screw up and trigger an emergency instead, which I've done.
Even the biggest Apple Watch screen is still tiny by any reasonable metric.
They aren't filling the extra space with "distractions", they are giving you space for an extra widget or two and making it marginally less painful to type.
Python was created in 1991; I imagine the "yield" keyword appeared either right then or not much later!
Also, the refinement at the end of the article: "We arrange an extra function parameter, which is a pointer to a context structure; we declare all our local state, and our coroutine state variable, as elements of that structure." sounds like implementing a closure to me. You make the callee a lambda which would use an outside var/context/state to determine what to do or with what value. Am I understanding this correctly?
as lmm pointed out, python didn't have generators and yield until 2.2. icon, which tim peters adapted the idea from, had them quite a bit earlier than that, but i think it's reasonable to describe icon as not being a commonly used language, then or now
(python's generators are closer syntactically to icon's generators than they are semantically)