One thing I've learned by following a link from elsewhere in this thread is that while the total count of neurons in an animal's nervous system is not a good proxy for intelligence, the count of neurons in the forebrain is. By that measure, only the orca ranks higher than humans [1].
That doesn't mean language ability is a natural outcome of crossing a certain threshold of brain complexity; if anything it's more likely the other way around: this complexity being driven by highly social behavior and communication.
But LLMs can also explain code; in fact, they're fantastic at that. They can also be used to build anti-censorship, surveillance-avoidance, and fact-checking tools. We are all empowered by them; it's just up to us to employ them so as to nudge society toward where we'd like it to go, instead of giving up prematurely.
There are no cartoon villains in general; that's the point GP is making by using the word "cartoon". Let's use some common sense: it's not like Trump and Hegseth got together and sneaked the school onto the list of targets just because they liked the idea of children being killed. It's naive to suggest this is a possibility worth considering.
Yeah, going to have to go ahead and disagree with you there, boss. The man Hegseth, in all his 'no quarter' bravado, is only affirming his own mother's claim that he is a piece of shit. Respectfully, of course, I would not put it past him to kill some kids for a political or terrorism reason (the parents).
This is very different from targeting civilians as a goal in itself, which is what it would have had to be if this was not just negligence, but intentional, as GP suggested. Parent correctly points out that there's both no political incentive for that, and that it's not realistic from a psychological point of view, given reasonable assumptions about human nature.
The claim I'm responding to is "I refuse to believe anyone in the decision chain would move forward if they believed kids were going to be killed." I agree it's unusual for anyone in the US military to drop a bomb primarily because they want to kill some children. I think it is not unusual for people involved in bombing campaigns to anticipate killing children and move forward anyway.
> This is very different from targeting civilians as a goal in itself
Targeting a single person who might be a valid target had war been declared, while also intentionally striking many civilians around them, is the same as targeting those civilians. You knew the bomb you dropped was going to kill them, and you pressed the button. It makes no difference who the primary "target" is.
Otherwise, countries would just bomb all the civilians, and all their infrastructure and medical facilities and schools, with the excuse that they heard from an unnamed source that there was a combatant nearby, like Israel does in Palestine.
Self-improving AI systems aim to reduce reliance on human engineering by learning to improve their own learning and problem-solving processes. Existing approaches to self-improvement rely on fixed, handcrafted meta-level mechanisms, fundamentally limiting how fast such systems can improve. The Darwin Gödel Machine (DGM) demonstrates open-ended self-improvement in coding by repeatedly generating and evaluating self-modified variants. Because both evaluation and self-modification are coding tasks, gains in coding ability can translate into gains in self-improvement ability. However, this alignment does not generally hold beyond coding domains. We introduce *hyperagents*, self-referential agents that integrate a task agent (which solves the target task) and a meta agent (which modifies itself and the task agent) into a single editable program. Crucially, the meta-level modification procedure is itself editable, enabling metacognitive self-modification, improving not only the task-solving behavior but also the mechanism that generates future improvements. We instantiate this framework by extending DGM to create DGM-Hyperagents (DGM-H), eliminating the assumption of domain-specific alignment between task performance and self-modification skill to potentially support self-accelerating progress on any computable task. Across diverse domains, the DGM-H improves performance over time and outperforms baselines without self-improvement or open-ended exploration, as well as prior self-improving systems. Furthermore, the DGM-H improves the process by which it generates new agents (e.g., persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs. DGM-Hyperagents offer a glimpse of open-ended AI systems that do not merely search for better solutions, but continually improve their search for how to improve.
This 'self vs non-self' logic is very similar to how plants prevent self-pollination. They have a biological 'discrimination' system to recognize and reject their own genetic code.
> For Facebook, Instagram, Twitter, each person having their own website where they post and that post being pushed to these platforms is also another way to force interoperability on them or be left behind.
There's an acronym for this: POSSE (Publish [on your] Own Site, Syndicate Elsewhere). Part of the IndieWeb movement, for those who want to explore this worthwhile idea further.
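As a toy illustration of the POSSE idea (the platform names and character limits below are hypothetical, not any real site's API), the flow is: the canonical post lives on your own site, and each syndicated copy is tailored to the target platform and links back to the original:

```python
# Toy POSSE sketch: build per-platform copies of a post published on
# your own site. Each copy is truncated to fit a (hypothetical)
# character limit and always ends with the canonical permalink.

def syndicate(body: str, permalink: str, limits: dict[str, int]) -> dict[str, str]:
    """Return one tailored copy of the post per platform."""
    copies = {}
    for platform, limit in limits.items():
        suffix = " " + permalink
        room = limit - len(suffix)
        # Truncate the body if it won't fit alongside the permalink.
        text = body if len(body) <= room else body[: room - 1] + "…"
        copies[platform] = text + suffix
    return copies

copies = syndicate(
    "A long-form post published on my own site first.",
    "https://example.com/hello",
    {"short-form-site": 80, "long-form-site": 500},  # assumed limits
)
```

Real setups push these copies out via each platform's API (or a service that does it for you); the point of the sketch is only that the source of truth stays on your own domain.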
Sure, you can do that. But then the syndicated content usually ends up looking like low-effort slop and doesn't get much traction. Each publishing platform has its own features, limitations, and cultural norms. If you want to have any impact, then you can't just copy content around: you have to tailor the message to the medium.
Probably some AI assistance was involved, though if so you'd expect em dashes above, for example. A better tell is "No regression. No noise. Just compounding." It's not enough to bother me, even though I'm often annoyed by the ever-expanding tide of slop.
A hiker on a mountain might as well imagine that at the end of their journey they will step off onto the moon. But it's just a mirage. As we humans have externalized more and more of our understanding of the world into books, movies, websites, and the like, our methods of plumbing this treasury for just the needed tidbits have developed as well. But it's still just working off that externalized collective understanding. This includes heuristics for combining different facts to produce new ones, sure, but it remains dependent on brilliant individuals to raise the "island peaks", which ultimately pull up the level of the collective intelligence as well.
Sure, but it's entirely possible this point lies way past the expiry date of the universe itself (if there is such a thing). Plus, I do believe in magic - the magic of Life, the Universe, and Everything. And "42" doesn't dispel it for me.
1. https://en.wikipedia.org/wiki/List_of_animals_by_number_of_n...