And, again, is one person going to develop those? A person with access to elastic rope might invent the slingshot, but I wouldn't expect them to invent the far superior sling: it's not obvious that the sling is better, since the learning curve is steeper. And a slingshot is not a particularly effective weapon: it's an inefficient bow that can't fire arrows.
You're still thinking in terms of "sighted society versus blind society", which is not what we are discussing. (Unless you're thinking "sighted and superintelligent", in which case I'd say sight is probably redundant.)
Ok. Just evading blind people would be absurdly easy if you can see. You could accurately throw rocks at them and run away all day. And being attacked from a distance would be terrifying to blind people.
Blind people are no less capable of throwing stones, and you only have the flight advantage if the ground is potentially treacherous (e.g. unmanaged forest, scrubland) or you're that much faster. Any inhabited area will have been engineered to be safe for people to navigate – and it will not be well lit at night, when your reliance on vision will put you at a skill disadvantage.
The main advantage in an urban combat environment, I think, would be the ability to detect quiet people at a distance. Not needing to see makes it easier to hide yourself from visual inspection, but why would anyone develop this skill if nobody can see? Then, if the only person to practice with is the enemy you're trying to hide from… Also, you'd be able to dodge projectiles by watching the person throwing them, who might not telegraph their throws audibly, but would probably do so visually. This would let you defeat a single ranged opponent, possibly two – though I doubt your ability to dodge the rocks from three people at once for long enough to take one down.
But what do you gain from winning fights against small numbers of people? (I doubt very much you could win against a group of 30 or 40 opponents with only sight as your advantage.) You would run out of food, shelter would be hard to come by, and every theft of resources would risk defeat – and one defeat against a society means it's over. You're killed, imprisoned, or they decide to do something else with you, presumably depending on how much of a menace you've been. Your only options are to attempt a self-sufficient lifestyle (which you probably won't survive for long), to flee somewhere they haven't heard of your deeds, or to put yourself at the mercy of the justice system (and hope it isn't too retributive).
"Blind people are no less capable of throwing stones"
They sure suck at aiming.
But the best way to exploit the ability to see when everyone else is blind is to provide a service blind people can't. You could be a far better doctor: diagnosing diseases by sight and performing surgery with much greater precision.
> This experiment was inspired by @swyx’s tweet about Ted Chiang’s short story “Understand” (1991). The story imagines a superintelligent AI’s inner experience—its reasoning, self-awareness, and evolution. After reading it and following the Hacker News discussion, ...
Umm...
I <3 love <3 Understand by Ted Chiang,
But the story is about superintelligent *humans*.
I'm glad the author spent some time thinking about this, clarifying his thoughts and writing it down, but I don't think he's written anything much worth reading yet.
He's mostly in very-confident-but-not-even-wrong territory here.
One comment on his note:
> As an example, let’s say an LLM is correct 95% of the time (0.95) in predicting the “right” tokens to drive tools that power an “agent” to accomplish what you’ve asked of it. Each step the agent has to take therefore has a probability of being 95% correct. For a task that takes 2 steps, that’s a probability of 0.95^2 = 0.9025 (90.25%) that the agent will get the task right. For a task that takes 30 steps, we get 0.95^30 = 0.2146 (21.46%). Even if the LLMs were right 99% of the time, a 30-step task would only have a probability of about 74% of having been done correctly.
The main point – that errors can accumulate across sequential steps and need to be handled – is valid and pertinent, but the model used to "calculate" this is quite wrong: steps don't fail probabilistically independently.
Because later actions can depend on the outcomes of earlier ones, and because we only care about the final outcome rather than intermediate failures, errors can be detected and corrected along the way. Even a step that "fails" can still lead to overall success.
(This is not a Bernoulli process.)
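To make that concrete, here's a minimal toy simulation (my own sketch, not from the article; the retry policy and all numbers beyond the quoted 0.95/30 are assumptions) contrasting the independent-failure model with one where a failed step can be detected and retried:

```python
import random

P_STEP = 0.95   # per-step success probability, from the quoted example
STEPS = 30      # task length, from the quoted example
TRIALS = 100_000

def run_independent() -> bool:
    # The article's model: any single step failure dooms the whole task.
    return all(random.random() < P_STEP for _ in range(STEPS))

def run_with_retries(max_retries: int = 2) -> bool:
    # Toy correction model: a failed step is detected and retried a few times.
    for _ in range(STEPS):
        attempts = (random.random() < P_STEP for _ in range(1 + max_retries))
        if not any(attempts):
            return False  # step failed even after all retries
    return True

independent = sum(run_independent() for _ in range(TRIALS)) / TRIALS
corrected = sum(run_with_retries() for _ in range(TRIALS)) / TRIALS
print(f"independent failures: {independent:.3f}")  # ~0.95^30 ≈ 0.215
print(f"with retry/repair:    {corrected:.3f}")    # ~(1 - 0.05^3)^30 ≈ 0.996
```

The specific numbers don't matter; the point is that once any feedback or repair loop exists, per-step errors stop compounding multiplicatively.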
I think he's referencing some nice material and he's starting in a good direction by defining agency as goal-directed behaviour, but otherwise his confidence far outstrips the firmness of his conceptual foundations or the clarity of his deductions.
Part of the problem seems to be that he’s trying to derive a large portion of philosophy from first principles and low-n observations.
This stuff has been well-trodden by Dennett, Frankfurt, Davidson, and even Hume. I don’t see any engagement with the centuries (maybe millennia) of thought on this subject, so it’s difficult to determine whether he thinks he’s the first to notice these challenges or what new angle he’s bringing to the table.
> I don’t see any engagement with the centuries (maybe millennia) of thought on this subject
I used to be that person, but then someone pointed me to the Stanford Encyclopedia of Philosophy, which was a real eye-opener.
With every set of arguments I read, I thought "ya, exactly, that makes sense" – and then I'd read the counters in the next few paragraphs: "oh man, I hadn't thought of that, that's true also". Good stuff.
I've wanted something like this for a while, to use with architectural PlantUML diagrams rendered to SVG with hyperlinks to their implementations.
Agreed, but a check on the surface details of what you've read, plus a detailed discussion grounded in the text, can really help cement the book in your mind.
I've experimented with sessions that start with prompts like:
> Hi, please act as a tutor; I will act as a student. I'm working through the following text of X by Y, trying to engage with it more deeply. Specifically, I want active recall to clarify and consolidate my long-term memory of it, and to make sure the ideas are connected in my memory to my existing concepts and concept maps. So please ask me questions from the text given. Present me with just the questions, then allow me to give answers. Discuss each answer, justifying your responses from the text.
I almost always glean details from masterful metasurveys and monographs – details buried in the syntax that only emerge later. Learning is as much about the unconscious parallels. There are too many examples of this that LLMs would have zero access to, since it's an agrammatic kind of secret code that makes ideas come to life.
The idea that we have any need to summarize, based on the thinnest notions of knowledge-building – checklists, bullet points, generic plain-English ideation – says that learning is in collapse, in favor of an expediency that precedes an information collapse.
One could design a toolchain that publishes a signed hash of the exact source used to produce a signed binary.
Build and deploy what you want; if you want people to trust it and opt in, the source is publicly available.
In this scheme you get the signature, which confirms the device and links to a tamper-proof snapshot of the code used to build its firmware.
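A minimal sketch of how that could look, assuming an Ed25519 vendor key and a source tarball (all names here are illustrative, not any particular toolchain):

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def hash_source(tarball_path: str) -> bytes:
    """SHA-256 over the exact source snapshot used for the build."""
    h = hashlib.sha256()
    with open(tarball_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

# The vendor signs the source hash with the same key (or key hierarchy)
# that signs the firmware binary, then publishes hash + signature with it.
vendor_key = Ed25519PrivateKey.generate()           # stand-in for a real HSM key
source_digest = hash_source("firmware-src.tar.gz")  # hypothetical snapshot
signature = vendor_key.sign(source_digest)

# Anyone can later check that the published snapshot matches the signature.
vendor_key.public_key().verify(signature, source_digest)  # raises if tampered
```

In practice the private key would live in an HSM, and the hash/signature pair would be posted somewhere append-only (e.g. a transparency log) so the snapshot can't be quietly swapped later.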
A short, understandable sentence or paragraph early on needs to answer the main question the title raises.