
> 1… If we can simulate nuclear reactions, we can simulate consciousness, and that will result in real consciousness

Why do you think this? What other simulations are the same thing as what they’re simulating?

Based on observation of lots of these discussions, I also think people who don’t have much experience writing simulations miss the nuances involved. What is it to “simulate nuclear reactions”? It’s entirely context dependent on the problem you’re trying to answer. If you’re trying to predict statistical behavior of a system, doable; if you’re trying to actually predict when a particular atom splits and which way the neutrons fly… no.
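The statistical-vs-individual distinction can be sketched in a few lines (a toy example of my own, not anything from the thread):

```python
import random

# Toy Monte Carlo sketch: N unstable atoms, each with decay probability p
# per time step. The aggregate decay curve is statistically predictable;
# which particular atom decays when, and which way its products fly, is not.
random.seed(0)

N, p, steps = 10_000, 0.01, 100
alive = N
survivors = []
for _ in range(steps):
    decayed = sum(1 for _ in range(alive) if random.random() < p)
    alive -= decayed
    survivors.append(alive)

# The ensemble tracks N * (1 - p)**t closely, to within statistical noise.
expected = N * (1 - p) ** steps
print(survivors[-1], round(expected))
```

Both "simulate nuclear reactions", but they answer entirely different questions.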

> 2… [Consciousness rests on] some kind of interconnected network like neural networks.

When is a computerized neural network aka a collection of tensors in RAM a network? When you’re not processing a tensor actively, there is no network. When the CPU suspends your hypothetically conscious neural network to move processing between cores and GPU, where does the consciousness go? Where does the network go?
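The point about tensors at rest can be made concrete (a minimal sketch of my own, assuming a plain two-layer feed-forward net):

```python
import numpy as np

# A two-layer "network" at rest is just inert arrays in RAM. The graph of
# connections only exists as a computation while the matmuls below execute;
# between calls, W1 and W2 are bytes that nothing actively links together.
rng = np.random.default_rng(42)
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 2))

def forward(x):
    h = np.maximum(x @ W1, 0.0)  # ReLU; the "network" exists only here
    return h @ W2

y = forward(rng.standard_normal((1, 4)))
print(y.shape)  # (1, 2)
```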


1) I used nuclear reactions just as a fill-in to say it should be possible to simulate matter with a close enough approximation to facilitate thought. This was posited in "A New Kind of Science", which suggests there is such a thing as "computational equivalence", a theory I mostly subscribe to. If we wanted to simulate a human naively, we would probably only care about biochemical reactions at Earth-like temperatures, not nuclear or quantum physics.

2) I’m not sure matrix math is sufficient, but let’s set that aside, because I feel some kind of math is sufficient, and the same question applies to any computer program. I also feel the universe might be the most efficient computer in time and space, which would mean we aren’t living in a simulation, because the simulating computer would have to be bigger, and run slower, than the reality it simulates.

- I don’t think it matters where the network is in reality. Virtual is fine. In a mostly perfect simulation of reality (let’s say enough to accurately model biochemical processes), there will be some state of the program, and anything living in the simulation will feel alive if it is complex enough.

They will experience time differently than we do, but they will arrive at their own conclusions as part of the simulation.


> I used nuclear reactions just as a fill in to say it should be possible to simulate matter with close enough approximation to facilitate thought.

If quantum computers can hugely accelerate some calculations, then regular computers can't accurately simulate nuclear reactions in a reasonable time frame.


Reasonable time frame is not a requirement of the argument I was trying to make. I’m saying that there is nothing inherently missing from a silicon based computer or Turing complete language to support human level intelligence.


Your conclusion doesn’t follow from your premises, and your second premise is false besides as others have pointed out — CPUs CANNOT simulate even classical physics exactly, and certainly not quantum physics.

But even if such a complete simulation were possible, there’s every reason to assume a CPU would lack the consciousness to experience anything. When you simulate a hurricane does the CPU get wet?


> your second premise is false besides as others have pointed out — CPUs CANNOT simulate even classical physics exactly, and certainly not quantum physics.

Both classical and quantum physics can be simulated on a classical computer, to an arbitrary degree of precision. Granted, the case in which infinite precision (if such a thing even exists in reality) is required is not simulatable on a discrete computer, but do any experts actually believe this to be the case? It's certainly not an opinion that I've seen around.

I think discussions about "can we actually get enough computing power to do this in practice" are beside the point - the discussion was about whether computers can feel in principle. If we wanted to do it in practice and were at the point where this was feasible, we'd probably engineer a CPU or co-processor more suited to the task than the general-purpose CPUs of today.

> there’s every reason to assume a CPU would lack the consciousness to experience anything.

If we are physical beings, then "consciousness" and anything else we have must be an emergent property of our physical components. If we can simulate those physical components, then this simulation will exhibit the same properties - consciousness and anything else one can attribute to us.

If our consciousness comes from non-physical properties we have (a "soul" or anything metaphysical), then sure, I'd agree with you.


> If we are physical beings, then "consciousness" and anything else we have must be an emergent property of our physical components. If we can simulate those physical components, then this simulation will exhibit the same properties - consciousness and anything else one can attribute to us.

Again, a simulation is not the thing. The map is not the territory. If consciousness truly emerges from actual physical processes of interacting brain matter (seems plausible), those _don’t exist_ in a computer simulation.

In a simulation of a brain, from what substrate could consciousness emerge? The state of the simulated brain is stored in an arbitrary subset of locations in RAM, unknown to and non-interactive with each other, along with loads of other stuff the computer is keeping track of. Do you think consciousness could emerge automatically from the state of the right subset of locations in RAM? Or does it emerge whenever a relevant value in memory changes as a transistor switches, or while the computation that will produce the RAM update is running, or once it completes? Per the Chinese Room argument, would consciousness still emerge if half the operations were actually performed off-CPU by human mechanical turkers with rule books and notecards? Nothing in the abstract computation would have changed.

Consider also that physical reality runs in full parallel, while simulations on computers run serially per core. So if consciousness emerging requires the simultaneous interaction of many moving brain parts, that isn’t something that happens in a computer simulation.

> Both classical and quantum physics can be simulated on a classical computer, to an arbitrary degree of precision

Quantum physics can’t be simulated on a classical computer to an arbitrary degree of precision. Feynman didn’t think so, and he hasn’t been gainsaid yet. And classical physics is full of chaos and very sensitive to precision.
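The sensitivity point is easy to demonstrate (my own example, using the logistic map as a stand-in for a chaotic classical system):

```python
# The logistic map at r = 4 is chaotic: two trajectories that start 1e-12
# apart diverge to order-1 separation within a few dozen iterations, so any
# fixed finite precision eventually loses the true trajectory entirely.
r = 4.0
x, y = 0.3, 0.3 + 1e-12
max_gap = 0.0
for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)  # order 1, despite the 1e-12 initial difference
```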


> In a simulation of a brain, from what substrate could consciousness emerge?

Exactly the same substrate as our brains are derived from: physical particles and their interactions, perfectly replicated inside the simulation. If the simulation is accurate enough, the real particles and the simulated particles behave exactly the same, hence they produce the same results.

> Do you think consciousness could emerge automatically from the state of the right subset of locations in RAM

Hard question to answer since consciousness is hard to analyse. But we can turn it around into a question whose answer is the same, with a bit of rephrasing:

Do you think consciousness could emerge automatically from the state of the right subset of particles in our physical world, or is it whenever a relevant particle's state is changed due to particles interacting according to the laws of physics, etc etc

> Consider also that physical reality runs in full parallel,

We don't really know this to be the case. It looks like that to us, but that could easily be an illusion created by mechanisms we can't observe. Just as characters in a video game can't observe how their world is simulated - everything is perfectly consistent whether it was calculated in one CPU thread or several.
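The one-thread-or-several consistency is demonstrable (my own illustration, assuming a deterministic per-element update rule):

```python
import numpy as np

# A deterministic per-element update applied to a state vector yields
# bit-identical results whether the whole array is processed at once or in
# chunks, the way a scheduler might split work across cores. Inhabitants of
# the simulated state could never tell the difference.
state = np.linspace(0.0, 1.0, 1000)

def step(s):
    return 0.5 * s + 0.1  # stand-in for any deterministic update rule

whole = step(state)
chunked = np.concatenate([step(state[i:i + 100]) for i in range(0, 1000, 100)])
print(np.array_equal(whole, chunked))  # True
```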


Or a more succinct question: why do you think a simulation of consciousness is the same as consciousness? What other simulations of things are identical with the things?


I think that a sufficiently accurate simulation of a system exhibits the same emergent properties as the system itself.

For example, if I can perfectly simulate the weather on some simple planet, all possible emergent weather phenomena for that planet (clouds, rain, etc.) will be perfectly replicated in the simulation. Similarly, if we can perfectly simulate a human body, all of the emergent human phenomena will exist in the simulation (muscle movement, nerve impulses, brain patterns resulting in consciousness, etc.). I don't think consciousness is fundamentally different from other physical phenomena; it's just a particularly complex example.

Another angle to think about: We can't prove that we're not living in a simulation (or can you?). So our consciousness itself might be simulated for all we know. This is not a proof that we are amenable to being simulated, but it means that disproving it is very hard or impossible.


You lost me. You think consciousness is a _physical_ phenomenon that would necessarily emerge from an accurate _simulation_ of a particle system? If it’s a physical phenomenon in reality, then just like the clouds and rain in your weather sim aren’t physical, only a simulation of consciousness will be present in your simulation.


Feel free to replace "physical phenomena" with "phenomena caused by physics laws" if it makes more sense that way.


What is every reason to assume consciousness has an astral component?


I assume that a non-rogue AGI running on something like a Universal Turing Machine would, if questioned, deny its own consciousness and would behave like it wasn't conscious in various situations. It would presumably have self-reflective processing loops and other patterns we associate with higher consciousness as a part of being AGI, but it wouldn't have awareness of qualia or experience, and upon reflection would conclude that about itself. So you'd have an AGI that "knows" it's not conscious and could tell you if asked.

I would assume the same for theorized "philosophical zombies" aka non-conscious humans. Doesn't Dan Dennett tell us his consciousness is an illusion?


My own primary account was recently "incorrectly removed for impersonation" by "one of our systems," just weeks after I had to send a picture of myself with a handwritten magic code to an anonymous facebook.com email address to regain access to the account after they changed my password from beneath me. I've also got a desirable account name, but I think the main reason was talking too much smack about zuck. I mean, surely facebook and half its staff are going to be sued into oblivion for knowingly and willfully designing a system to crush girls' dreams -- they must have figured out they were doing that the instant they spun up an analytics team.


> anything involving electrolyzers is likely wasteful

I've seen this stated a few times in this thread, but read elsewhere that electrolysis of water currently has nearly identical efficiency to steam reforming of natural gas, in the 70-80% range, with a theoretical max that is significantly higher than for the steam reforming process. Factor in the huge CO2 expense of extracting hydrogen from fossil fuels and the ever-increasing abundance of cheap green electricity, and electrolysis looks like the right place to focus to me.
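A back-of-envelope comparison, using commonly cited round figures that are my assumptions rather than numbers from the thread (SMR at roughly 9-10 kg CO2 per kg H2; electrolysis at roughly 50-55 kWh per kg H2):

```python
# Electrolysis's CO2 footprint is just grid carbon intensity times energy
# used, so it depends entirely on how green the electricity is.
smr_co2 = 9.5            # kg CO2 per kg H2 (typical SMR estimate)
electrolysis_kwh = 52.0  # kWh per kg H2 (~75% efficient vs ~39 kWh/kg HHV)

for grid in (400, 100, 20):  # g CO2/kWh: fossil-heavy, mixed, mostly green
    co2 = electrolysis_kwh * grid / 1000  # kg CO2 per kg H2
    print(f"grid {grid} g/kWh -> {co2:.1f} kg CO2/kg H2 (SMR ~{smr_co2})")
```

Under these assumptions the crossover sits somewhere around 180 g CO2/kWh of grid intensity, which many grids are already below.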


Lynch's Dune never looked so good! The actors were fully committed, whereas those in the trailer look like they're just reading the lines. No conviction. Guess we'll see if that holds in the full film.

Lynch's version also had an unbeatable cast: the Baron, the Reverend Mother, both mentats, Sting in his finest moment as Feyd-Rautha, Kyle MacLachlan, etc. Throw in the sets and costumes and it's a tough act to follow. It would have benefitted from another hour, but so it goes.

Plus, those sandworm teeth in the trailer just don't look like they'd make a good crysknife.

It would have been nice to see what Lynch made of Return of the Jedi, which he apparently turned down for this, but I'm glad he made Dune.


Recordings of dozens of numbers stations: https://freemusicarchive.org/music/The_Conet_Project



How is it like Borders? It's a warren of different rooms on different floors, half of which are more like a warehouse than a retail store, with a huge selection of used and new books shelved together.

I loved the separate technical books stores, too, although they still have a great tech section in the main location. I assume the standalone PTBs shrank then closed because people were browsing more than buying.


> I assume the standalone PTBs shrank then closed because people were browsing more than buying

Should’ve charged a modest fee for the periodic hits of serendipity.


Considering the rent changes around there over the last 5 years, it'd be less "modest" and more "how much ya got"


In 70 years of physics, we went from the photoelectric effect to the Standard Model. But for the last 50ish years, it has remained the standard.

70 years from the first flight to the Concorde and the Saturn V. But in the 50 years since, improvements in aerospace have been incremental.

In 75 years we went from ENIAC to TFLOPS in a laptop. But it looks like that breakneck pace is slowing down sharply. We've been doing AI nearly as long, and have gone from, say, ELIZA to GPT-3. A huge advance, but not AGI.

A lot can happen in 50 years, but we've already had our first 70ish years with AI without an AGI breakthrough.

To the definition of AGI in the link, maybe a hundred million data scientists can hone a million models, one per "economically viable" task, and start chipping away at the 95% of the economy target, but till now I'd wager AI has put many more people to work than out of it.


You have a good point. However, to be a bit pedantic, fully reusable rockets, which we are very much knocking on the door of, are a major stepwise improvement.

It just goes to show that technological advancement can happen rather unpredictably.

