I suspect it’s mostly a naming convention. Wars are often labeled after the territory where the fighting occurs rather than the actors involved. That’s why we say “Ukraine war” or “Iraq war,” even though multiple states may be involved.
In this case, “Iran war” is a bit misleading because the conflict is largely a missile and proxy confrontation affecting several territories (Iran, Israel, and parts of the Gulf), not just one battlefield.
Personally, I find it clearer to name conflicts after the primary actors involved. For example:
Russia–Ukraine war
U.S. & Israel–Iran war
That makes the participants explicit instead of implicitly framing the war around a single country or location.
An essay exploring how scale and monetization gradually shifted the internet from open exploration to subscription lock-in, behavioral sorting, and narrative segmentation.
The argument is that these shifts were not coordinated enclosures, but structural responses to uncertainty and growth.
https://borisljevar.substack.com/p/once-upon-a-time-the-inte...
One angle I expected the post to touch on is the systemic effect of everyone adopting these tools at once.
If AI reduces the marginal cost of writing emails, summaries, updates, etc., the rational move for each individual is to produce more of them. But if everyone does that, the total communication volume increases — and now everyone has more inbound messages to process.
So even if AI makes me faster at drafting emails, it may simultaneously increase the number of emails I have to read and respond to. The bottleneck shifts from production to attention.
It feels similar to Jevons paradox: efficiency gains increase total consumption. Here the scarce resource isn’t writing time — it’s cognitive bandwidth.
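A toy back-of-envelope model makes the shape of the effect visible. All numbers here are hypothetical assumptions, and the helper function is mine, not something from the post:

```python
# Toy model: does a faster drafting tool save net time once everyone adopts it?
# All numbers are hypothetical, chosen only to illustrate the dynamic.

def weekly_email_minutes(draft_min, read_min, sent_per_person, team_size):
    """Minutes one person spends on email per week, assuming everyone
    sends `sent_per_person` messages and reads everyone else's."""
    inbound = (team_size - 1) * sent_per_person
    return sent_per_person * draft_min + inbound * read_min

# Before AI: 5 min to draft, 1 min to read, 20 emails/week each, team of 10.
before = weekly_email_minutes(draft_min=5, read_min=1, sent_per_person=20, team_size=10)

# After AI: drafting drops to 1 min, so suppose people send 3x as many.
after = weekly_email_minutes(draft_min=1, read_min=1, sent_per_person=60, team_size=10)

print(before)  # 280 min/week: 100 drafting + 180 reading
print(after)   # 600 min/week: 60 drafting + 540 reading
```

In this sketch, drafting cost per email falls 5x, yet total time more than doubles, because reading time, which the tool doesn't reduce, dominates once volume grows.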
I’m curious whether AI will actually reduce shallow work, or whether it will amplify it at the system level by lowering the cost of generating it.
Has anyone observed this dynamic already in their org?
I've written an essay exploring a second-order risk of AI model collapse. The technical safeguards proposed to prevent it—filtering out "AI-like" text—create a perverse cultural incentive: they risk systematically discarding clear, polished human thought while rewarding noisy, imperfect text as "authentic."
This isn't just a data problem. It becomes a gatekeeping problem, affecting academia, journalism, and publishing, and could ultimately feed degraded language back into future AI training. I trace the feedback loop from technical mechanics to cultural distortion.
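For a concrete sense of how the perverse incentive could arise, here is a minimal sketch of the kind of filter the essay worries about. It uses "burstiness" (variance in sentence length) as a proxy for AI-likeness; the heuristic, the threshold, and the sample texts are all invented for illustration, and real detectors are more sophisticated, but the failure mode has the same shape:

```python
import statistics

def burstiness(text: str) -> float:
    """Population std. deviation of sentence lengths, in words.
    Uniform, polished prose scores low; uneven prose scores high."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

def keep_for_training(text: str, threshold: float = 2.0) -> bool:
    """Crude 'AI-likeness' filter: discard text that reads too uniformly.
    The threshold is arbitrary, picked here just to separate the examples."""
    return burstiness(text) > threshold

polished = ("The committee reviewed the draft. It approved the budget. "
            "Members raised no objections. The meeting ended early.")
noisy = ("ok so the meeting. it went long, way longer than anyone wanted "
         "honestly. budget approved tho. done.")

print(keep_for_training(polished))  # False: even rhythm reads as 'AI-like'
print(keep_for_training(noisy))     # True: irregular text passes as 'human'
```

The polished, evenly edited prose scores as "AI-like" and gets dropped, while the sloppy text passes as "human," which is exactly the gatekeeping incentive described above.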