
Read this in the past and there are a few things that bother me:

* It's impossible to simulate a universe of our current resolution, because it would take more matter than the original universe.

* You can't just simulate 'observable areas'. Everything needs to be simulated.

* An infinite loop does not end, even in an infinitely powerful computer

* A fun calculation from the ZFS folks: to fully populate a 128-bit filesystem (i.e. permute all combinations) you need a lot of energy. So much energy that you could boil the world's oceans (a sketch of the arithmetic follows the references). See:

* [1] Seth Lloyd, "Ultimate Physical Limits to Computation": http://arxiv.org/pdf/quant-ph/9908043.pdf

* [2] https://blogs.oracle.com/dcb/entry/zfs_boils_the_ocean_consu...
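
For the curious, here's a rough sketch of the kind of arithmetic [2] walks through. The figures are approximate and assume Seth Lloyd's ~10^31 bits/kg storage bound from [1] and 2^128 minimum-size (512-byte) blocks; see the post itself for the exact numbers.

    # Back-of-the-envelope version of the argument in [2]; all figures approximate.
    bits = 2 ** 140                  # fully populated 128-bit pool, in bits (2^128 blocks of 512 bytes)
    bits_per_kg = 1e31               # Lloyd's limit on bits stored per kg of matter [1]
    mass_kg = bits / bits_per_kg     # mass needed just to hold that many bits: ~1.4e11 kg
    energy_j = mass_kg * (3e8) ** 2  # running at that density means the mass is pure energy: E = m*c^2

    ocean_mass_kg = 1.4e21           # rough mass of Earth's oceans
    heat_per_kg = 4.2e5 + 2.26e6     # heat water 0->100 C plus latent heat of vaporization, J/kg
    boil_energy_j = ocean_mass_kg * heat_per_kg

    print("hold the pool:   %.1e J" % energy_j)       # ~1e28 J
    print("boil the oceans: %.1e J" % boil_energy_j)  # ~4e27 J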



"It's impossible to simulate a universe of our current resolution, because it would take more matter than the original universe."

The story is quite clear on this:

"a countable infinity of TEEO 9.9.1 ultra-medium-density selectably-foaming non-elasticised quantum waveform frequency rate range collapse selectors and the single tormented tau neutrino caught in the middle of it all"

That is, the entire simulation is being run on a single tormented tachyon.

As for this not being possible in the real world, well, sure. In the real world all evidence points towards there being limits on the computational capacity of the real universe. In their universe, by construction, they do in fact have access to countably infinite amounts of computation, at which point this is potentially possible. Physics in the story are obviously, by construction, not the same as the ones in the real world. This is, shall we say, a well-established literary move in the field of science fiction. Very, very, very... well established.


(Ugh... I remembered it as being a tachyon, so I typed that too. A tau neutrino is of course not a tachyon.)


Nothing about the actual computing mechanism introduced in the story is realistic, and that's on purpose. The point of the story is to investigate what would happen if such a thing were possible.


Isn't it basically a reductio argument for why it's not a possible scenario - there are no [discernible?] infinities in reality (or maybe there are no infinities because it's a simulation and simulating infinities is impossible ;0)> )


> An infinite loop does not end, even in an infinitely powerful computer

Sure it does! Run one processor instruction, then run the next one in half the time, then the next one in half as much time again...
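
(Toy arithmetic for the skeptical: if instruction n takes 2^-n seconds, the partial sums of the run time never exceed 2 seconds, so the "infinite" loop is done by t = 2.)

    # Partial sums of 1 + 1/2 + 1/4 + ... : total wall-clock time of the accelerating loop.
    dt, total = 1.0, 0.0
    for n in range(60):
        total += dt   # time spent on instruction n
        dt /= 2       # the next instruction runs twice as fast
    print(total)      # ~2.0: the whole infinite run fits in two seconds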


On termination, is the instruction counter odd or even?


Both, until you observe it ;-)


Better ask the folks researching https://en.wikipedia.org/wiki/Hypercomputation



A closed timelike curve under quantum mechanics is essentially an automatic fixed-point solver, in the same way that a stable quantum state in space is a coherent superposition. So yes, the story is kind of off, but in spirit it's also kind of dead on.


> An infinite loop does not end, even in an infinitely powerful computer

If the infinitely powerful computer is an accelerating Turing machine[1], wouldn't it end in finite time?

[1] Copeland, B. J. (2002). Accelerating turing machines. Minds and Machines, 12(2), 281–300.


> You can't just simulate 'observable areas'. Everything needs to be simulated.

I can't make sense of this. Whatever is outside the causality sphere is by definition irrelevant.


Here's a basic way of understanding this.

Take a piece of paper. Focus a camera on that piece of paper.

Turn on an overhead light. Did the paper change?

Now, imagine this not only to be a visual representation as limited as this, but instead causality. Then the causality sphere is, theoretically, as big as the universe itself.


I don't understand your analogy at all, but it seems to me that an object's light cone at a given time is necessarily smaller than the universe (in volume of spacetime). Also, you could control the size of the cone by controlling the duration of the simulation.


Clarification is necessary I suppose.

The concept being discussed is the limited scope of simulation. However, it can be assumed that everything affects (or can affect) everything else. The only way to know whether or not one thing may affect anything else is to simulate it.

How could you know the sphere of causality without simulating everything?

An object's light cone is not a property of the object; it is the result of the properties of the object as they relate to the light shining on that object. If the source of that light isn't represented in a simulation, then unless that object produces the light itself, the object must be dark or the simulation isn't accurate.

The discussion is about simulating a universe. Unless there are truly discrete systems within that universe, you can't do that simulation within a limited frame.

I think what the author was alluding to, though, was to say that the actual rendering of the view, rather than the calculation of the view, would be limited to the current frame of reference.


I think I see what you mean, but isn't it true that if there is light shining on the object, then the lamp must be in the light cone? It is literally impossible for something outside the light cone to have any physical impact on the object at all.

We do know the sphere of causality already, no computation needed: it is the sphere defined by the distance light has had time to travel. You don't need to simulate anything outside that zone to know that it is impossible for matter outside it to physically impact the center.

Think of it this way: if you want to simulate Earth through the year 2000, you know you don't need to simulate Alpha Centauri after 1996, since it is more than 4 light years away. You can know this without doing any computation at all.
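
A sketch of that bookkeeping, if it helps (units of years and light years with c = 1; the ~4.37 ly distance to Alpha Centauri is the only input):

    # An event can only influence the observer if its light has had time to arrive.
    def can_affect(event_year, distance_ly, observer_year):
        return event_year + distance_ly <= observer_year  # c = 1 in these units

    # Alpha Centauri is ~4.37 light years away, so nothing happening there
    # after roughly 1995.6 matters to a simulation of Earth that stops in 2000.
    print(can_affect(1996, 4.37, 2000))  # False
    print(can_affect(1995, 4.37, 2000))  # True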


I know this is an old thread, but I wanted to point out that you do need to simulate reality beyond an object's light cone so that you know what's there when the light cone expands to reach it.


I don't think that's right. Nothing outside Earth(2014)'s past light-cone can affect anything inside it, nor can anything outside Earth(2015)'s past light-cone affect anything inside it. Expanding the light-cone into the intervening year doesn't require you to simulate anything further out - well, you need to go outside Earth(2014)'s light-cone to simulate Earth(2015), but that's hardly worth mentioning.

I think you're picturing a sphere of 13Gly radius (or whatever) centred on present-day Earth, expanding at lightspeed to encompass new stars and galaxies. But while new matter enters our light-cone, it is not doing so as stars and galaxies. Those all have pasts within the light-cone - you don't need to go outside our past light-cone to find all the things that can affect them. Any matter that only entered our observable universe in the last year doesn't have a past, because it only just came through the Big Bang. It's primordial chaos, not fully-formed stars; you don't need to work out millennia of its past to know what's there. Unless for some reason the simulation needs to calculate pre-Big Bang conditions, which is possible, but then the definition of "light-cone" needs to be amended.


At what point do objects & events outside the causality sphere stop influencing things inside it?


Immediately, by definition. By causality sphere he means light cone [0]. If you want to simulate an entity, you only need to simulate the entity and the contents of its light cone. Anything outside the cone could not have a causal impact on the entity without FTL travel.

Of course, an entity's light cone is still likely to be quite large, especially if your simulated universe is old.

[0]: https://en.wikipedia.org/wiki/Light_cone


Except you have to have been simulating outside the light cone so that, once the light cone expands to include that space, you know its state. So in reality, if you're interested in location V at time t, you have to simulate everything that will end up within the sphere centered at V by time t, not just whatever is in the sphere instantaneously.


Yes, which is why the light cone is actually described as a 4D object that exists in spacetime, not a 3D sphere.

Some back of the envelope calculation indicates that the spacetime "volume" of the light cone of an object is 1/8th of the spacetime volume of the universe up to that point. So using the light cone would net you an 8x speedup over a brute-force Universe render. Nothing to write home about if you have infinite computing power.


> an entity's light cone is still likely to be quite large, especially if your simulated universe is old.

And the light cone (more precisely, the past light cone) keeps getting larger, so in fact the "amount of universe" that needs to be simulated increases without bound as time goes on.


Some back-of-the-envelope calculation leads me to believe that for a given location in space and time, its past light cone contains 1/8th of the spacetime between the beginning of the universe and the location.


I'm not sure how you're doing the calculation, but your answer is not correct.

Our current best-fit model of the universe has it being spatially infinite, which means the "volume of spacetime" between the Big Bang and "now" (more precisely, between the Big Bang and the "comoving" surface of simultaneity that passes through the Earth right now--"now" is relative, so you have to specify what simultaneity convention you're using) is infinite. Since the volume of any past light cone is finite, it will be effectively zero compared to the total volume of spacetime up to any surface of simultaneity. (But as you go to later and later surfaces of simultaneity, the volume of the past light cone of any event on the surface of simultaneity still increases without bound.)

Even in spatially finite models (which are not conclusively ruled out by the data we have, though they are unlikely to be correct), the fraction of spacetime between the Big Bang and a given "comoving" surface of simultaneity that is occupied by the past light cone of an event on that surface of simultaneity is not constant; it gets larger as you go to later surfaces of simultaneity.


> * You can't just simulate 'observable areas'. Everything needs to be simulated.

Is that an assertion? Do you have something to back up that claim? [I think if this is not backed up then your first point falls as well].

> * An infinite loop does not end, even in an infinitely powerful computer

What do infinite loops have to do with anything? Presumably the creators of the simulation would be skillful enough to avoid them.

> * A fun calculation from the ZFS folks: to fully populate a 128-bit filesystem (i.e. permute all combinations) you need a lot of energy. So much energy that you could boil the world's oceans,

Once again, who said anything has to be 'fully populated'?


I think the assertion that you have to simulate the whole thing is somewhat self-evident, and I'm not a physicist, but I'll try.

Take the "window" of the Earth. Without simulating the rest of the solar system, there is no way to account for the gravity effects of the larger planets, the energy from the sun, the occasional impacts from asteroids, the high-profile fly-bys of comets like Halley's, etc. Without simulating the rest of the universe, what would the simulated-astronomers on your simulated-Earth be looking at when they peer into their telescopes? What happens when you fast-forward far enough into the "future" that the Milky Way merges with Andromeda?

You can't simply hand-wave these things away by saying, "well, we would ray-trace the things that are observable" because you have no way of knowing what is observable without simulating the whole rest of the universe too. It's either simulate the whole thing, or your simulation is very limited and inaccurate.


If you are mainly interested in what happens with people, the accuracy of the rest of the simulation is not important: you only need to simulate the parts people see, at the resolution they can see them. You can even have lots of inconsistencies, as long as they are not reproducible.


The problem is that the world is chaotic: any approximation, no matter how small, will cause divergence very quickly. (This is actually the main problem with the simulation: how did they get the exact initial conditions and physical constants?)
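
(Quick illustration with the logistic map, a standard chaotic toy system: two trajectories that start 10^-12 apart are completely decorrelated within about 40 steps.)

    # Sensitive dependence on initial conditions: a 1e-12 error swamps the state.
    def f(x):
        return 4.0 * x * (1.0 - x)   # logistic map at r = 4 (chaotic regime)

    a, b = 0.3, 0.3 + 1e-12
    for n in range(61):
        if n % 20 == 0:
            print(n, abs(a - b))     # the gap roughly doubles each step, on average
        a, b = f(a), f(b)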


I think that, because of the laws of causality, you can't just simulate a tiny bit of the universe and expect to get accurate results: everything has a sphere of influence expanding at the speed of light.

For instance, astronomers observe phenomena that occurred in galaxies far, far away. If in the simulated universe you limited yourself to simulating, say, the solar system, then such events wouldn't occur in exactly the same way. We wouldn't find the same bodies at the same positions. That in turn would make the path of the simulation diverge significantly from our reality.

Think for instance what would happen if the constellations weren't the same in the sky. All astrology would be different. It might seem like a minor change but in the course of centuries that would probably amount to a big change.

That being said, since the story postulates that the computer has infinite processing power and storage, you can just leave out this bit and the story still makes sense: you just assume that it simulates the entire universe at all times.


This is merely a question of fidelity. One can imagine a simulator which produces low-fidelity everything and only increases fidelity gradually in isolated regions as the human observers peer at those regions. And, there is nothing to prevent results being continuously retroactively computed (just in time) as new 'discoveries' are made.
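
(A crude sketch of that compute-on-observation idea, using memoization; `render_region`, the region names, and the detail levels are all made up for illustration.)

    from functools import lru_cache

    # Nothing is computed until an observer looks; caching keeps repeated
    # observations consistent. render_region stands in for the expensive physics.
    @lru_cache(maxsize=None)
    def render_region(region_id, detail_level):
        print("computing", region_id, "at detail", detail_level)
        return hash((region_id, detail_level))  # placeholder for real state

    render_region("andromeda", 1)  # first look: computed lazily, at low fidelity
    render_region("andromeda", 1)  # second look: served from cache, stays consistent
    render_region("andromeda", 9)  # better telescope: refine just this region, just in time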


It is not known with certainty that very small actions across very large distances have any effect at all. It's possible that, just like you're literally seeing a past version of a galaxy, you're also literally seeing a less precise version of the galaxy.


But how can you decide what's going to have an influence and what will be without effect if you don't simulate everything?

If tomorrow we receive a transmission from a form of life that lived thousands of years ago thousands of light years away from us it's going to change a lot of things. You can't ignore that.

You can only simulate a small part of the universe if you can simulate correctly what happens at the boundaries of the region. Otherwise it "contaminates" the simulation inwards at the speed of light. If you want to simulate what will happen on Earth for the next year, you at least need to know the full state of the universe in a sphere one light year in radius, or at least that's my understanding.
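
(You can watch that contamination happen in any toy model where influence moves one cell per step: run a 1D cellular automaton on a finite window with unknown boundaries, and the region you can still trust shrinks by one cell per side per step, so simulating T steps at the centre needs correct initial data out to radius T.)

    RULE = 110  # any elementary cellular automaton; influence travels one cell per step

    def step(cells):
        # Update the interior only: the edge cells depend on unknown neighbours
        # outside the window, so the trustworthy region shrinks by one cell per side.
        return [(RULE >> (cells[i-1]*4 + cells[i]*2 + cells[i+1])) & 1
                for i in range(1, len(cells) - 1)]

    window = [0, 1] * 50   # 100 correctly-known cells around the point we care about
    steps = 0
    while len(window) > 1:
        window = step(window)
        steps += 1
    print(steps)           # 50: after (window radius) steps, nothing in the window is reliable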

I think these concepts somewhat overlap with the "Holographic principle" although I might be mistaken, I'm way out of my depth: https://en.wikipedia.org/wiki/Holographic_principle


That's why us being in a simulation is one solution to Fermi's paradox.


Take a look at HashLife; it changed my perspective on this specific point.


HashLife still simulates everything; it just optimizes for similar patterns. While it does seem a reasonable optimization for simulating an entire universe (although why would you bother if you have unlimited processing power?), I don't think it's a good parallel to what the parent was suggesting.

In HashLife you still need to have your entire universe in the hash tree; you can't take a piece of the pattern, simulate it for X generations without considering outside influence, and expect to get the same results as a full simulation.
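
(For anyone who hasn't seen it, the core trick is hashing patterns and memoizing their futures. A very stripped-down 1D caricature of the idea, not real HashLife:)

    from functools import lru_cache

    RULE = 110  # elementary cellular automaton standing in for "physics"

    def step(cells):
        return tuple((RULE >> (cells[i-1]*4 + cells[i]*2 + cells[i+1])) & 1
                     for i in range(1, len(cells) - 1))

    @lru_cache(maxsize=None)
    def future(segment, t):
        # Brute-force evolve a segment t steps; memoized, so a pattern that
        # recurs anywhere in the universe is only ever computed once.
        for _ in range(t):
            segment = step(segment)
        return segment

    tile = tuple([0, 1, 1, 0] * 16)      # a repetitive region of the universe
    for _ in range(1000):
        future(tile, 16)                 # computed once, then served from cache
    print(future.cache_info().hits)      # 999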


Right, so take it a bit further. You don't have to simulate anything that isn't being observed, since you can create a temporal boundary between two areas.

So, translating that into physics:

1. You know you are only going to simulate for the next, say, day.

2. Take all the photons, x-rays, etc. heading towards Earth and continue to simulate them, but freeze everything outside of a reasonable distance (in HashLife this isn't arbitrary; it is perfectly defined).

3. Continue the simulation for that sub-area only.


> * An infinite loop does not end, even in an infinitely powerful computer

> What do infinite loops have to do with anything? Presumably the creators of the simulation would be skillful enough to avoid them.

That would be this:

"""

But it was still pretty exciting stuff. Holy Zarquon, they said to one another, an infinitely powerful computer? It was like a thousand Christmases rolled into one. Program going to loop forever? You knew for a fact: this thing could execute an infinite loop in less than ten seconds. Brute force primality testing of every single integer in existence? Easy. Pi to the last digit? Piece of cake. Halting Problem? Sa-holved.

"""


That's a mathematical joke/commentary on the nature of infinity and how people think it's just a really really large number.

At least I hope that's what it is.


Not to mention that when such an accurate simulation becomes possible, special relativity gets violated - you can know what's happening anywhere before a signal can travel from there to you.



