
I love the imagination here, and imagination in general, but the framing really stretches it... I think this whole article would be more substantive if it were a little more grounded in the concept of induction.

And it would help if the article admitted very openly that in the sentence "numbers are leaves", the word "are" refers to the existence of an isomorphism between the natural numbers and a family of nested sets - and that it's far from the only such isomorphism; induction is everywhere in math and computer science. I can imagine some little kid reading this and clinging to the reductionist idea that "numbers are (only) leaves", but they aren't just leaves.
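
For the curious, here is a minimal Python sketch of the construction being alluded to: 0 is the empty set and n + 1 is n ∪ {n}. frozenset stands in for a mathematical set, since Python sets can only contain hashable elements.

    def von_neumann(n: int) -> frozenset:
        """Encode a natural number as a von Neumann ordinal."""
        ordinal = frozenset()              # 0 := {}
        for _ in range(n):
            ordinal = ordinal | {ordinal}  # n + 1 := n U {n}
        return ordinal

    # The encoding witnesses the isomorphism: the set for n has
    # exactly n elements, and m < n iff von_neumann(m) is an
    # element of von_neumann(n).
    assert len(von_neumann(3)) == 3
    assert von_neumann(2) in von_neumann(3)

Church numerals or binary strings would encode the naturals just as faithfully, which is the point: the nested-set picture is one isomorphism among many.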

Anyhow, the setup is really nice and inspiring; it just ended up feeling like a tease. Hope to see a followup!



Author here. Wasn't expecting to see this on the front page!

I'm really very far from being a mathematician, and this was a write-up of a fun side project. I think the title would be unforgivably misleading in a formal context (if this were a paper claiming any new insights), but it was just something fun I wanted to write about. Maybe you read this and learned a little bit about set theory if, much like me, you had no idea what it was.

In general I resent popular science (especially in theoretical physics) that tries to reduce deep and interesting topics to poorly-thought-out analogies - but again, my aim here was not to educate per se. The same goes for Michio Kaku-style orating that assumes string theory a priori; later you end up having conversations with people who think string theory is established and tested because they watched a 40-minute video of him on YT.

Having said all this, I need to get better at giving titles to the things I write - my other post, about trying to build AGI in Rust, got similar criticism.

Either way thanks for the feedback!


I'm under the impression that, at least theoretically, Von Neumann's principles of self-replication, game theory, or optimization could be applied in the context of designing neural network structures.

You could think about organizing a neural network with layers or nodes that are indexed by Von Neumann ordinals, where the structure of the network follows the natural progression of ordinals. For example:

Each layer or node in the neural network could correspond to a finite ordinal (such as 0, 1, 2, etc.) or a transfinite ordinal (like ω, ω+1, etc.). The way the network expands and evolves could follow the ordering and progression inherent in the Von Neumann ordinal system.

This could lead to an architecture where early layers (low ordinals) represent simpler, more basic computations (e.g., feature extraction or basic transformations), while later layers (higher ordinals) correspond to deeper, more abstract processing and representations.
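
Only the finite fragment of this is programmable, of course. Here's a hedged Python sketch of what "layers indexed by finite ordinals" might look like (the name OrdinalNet and the layer widths are illustrative assumptions, not anything from the thread):

    import torch
    import torch.nn as nn

    class OrdinalNet(nn.Module):
        """A feedforward net whose layers are indexed by the
        finite ordinals 0, 1, 2, ..."""
        def __init__(self, widths: list[int]):
            super().__init__()
            # Layer k corresponds to ordinal k; lower ordinals do
            # the earlier, simpler stages of the computation.
            self.layers = nn.ModuleDict({
                str(k): nn.Linear(widths[k], widths[k + 1])
                for k in range(len(widths) - 1)
            })

        def forward(self, x):
            # Apply the layers in ordinal order: 0, then 1, then 2, ...
            for k in sorted(self.layers, key=int):
                x = torch.relu(self.layers[k](x))
            return x

    net = OrdinalNet([8, 16, 16, 4])
    out = net(torch.randn(1, 8))

As the sketch shows, the finite part is just an ordinary feedforward network with a relabelled index set; the genuinely ordinal-flavoured step would be a limit layer at ω, which is exactly what finite hardware can't provide.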

But I'm afraid there is no hardware substrate upon which to build such a thing.



