
Yes, because examining the old source code allows you to predict its behaviour, including how it rewrites its own source code.

This is the halting problem (http://en.wikipedia.org/wiki/Halting_problem), and there is no solution.



You are incorrect: the halting problem only proves that you cannot solve it in the general case. The halting behaviour of a very significant subset of programs can be statically determined; it's easy to prove that "main(){}" halts and that "main(){while(true);}" doesn't. It should be trivially obvious that you could group all programs into "Halts" or "Unknown" with no false positives simply by executing each program for X steps and observing the result.
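
A minimal sketch of that "run it for X steps and observe" classifier, assuming a POSIX system and using a wall-clock budget in seconds as a stand-in for a step count (the command-line interface and argument names here are purely illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <signal.h>
    #include <sys/wait.h>

    /* Classify a program as "Halts" or "Unknown" by running it
       for at most a fixed budget of seconds. No false positives:
       anything reported as "Halts" really did halt. */
    int main(int argc, char **argv) {
        if (argc < 3) {
            fprintf(stderr, "usage: %s <seconds> <program> [args...]\n", argv[0]);
            return 2;
        }
        unsigned budget = (unsigned)atoi(argv[1]);

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 2; }
        if (pid == 0) {
            alarm(budget);          /* SIGALRM kills the child once the budget expires */
            execv(argv[2], &argv[2]);
            _exit(127);             /* exec failed */
        }

        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status) && WTERMSIG(status) == SIGALRM)
            puts("Unknown");        /* still running when the budget ran out */
        else
            puts("Halts");          /* exited (or crashed) within the budget */
        return 0;
    }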

If this were actually a concern for the programmers, they could design the program carefully to ensure it falls into the "Halts" category.


The halting behaviour of a very significant subset of programs can be statically determined...

Technically this may be correct, but I feel confident in asserting that a transhuman AI would not fall into that subset. You would have to run a second AI with the exact same inputs in order to make your 'prediction', leaving you in the same predicament with the second AI.



