>>> you are still talking about a final optimization problem of 4^1,000,000,000.
There is no final optimization step that analyzes the 4^1,000,000,000 possibilities. We are not the best possible human-like creature with 1,000,000,000 base pairs.
> method of gradient descent
Do you know the method of gradient descent? Nice. It is easier to explain the problem if you know it. In gradient descent you don't analyze all the possible configurations, and there is no guarantee that it finds the absolute minimum. It usually finds a local minimum and you get trapped there.
For this method you need to calculate the derivatives, analytically or numerically. Looking at the derivatives at an initial point, you select the direction to move for the next iteration.
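A minimal sketch of what that looks like, using a numerical derivative on a made-up 1-D function (the function, learning rate, and step count are just illustrative):

```python
def gradient_descent(f, x, lr=0.1, steps=200, h=1e-6):
    for _ in range(steps):
        # numerical derivative via central difference
        grad = (f(x + h) - f(x - h)) / (2 * h)
        # move a small step in the downhill direction
        x = x - lr * grad
    return x

# toy function: f(x) = (x - 3)^2, minimum at x = 3
f = lambda x: (x - 3) ** 2
x_min = gradient_descent(f, x=0.0)
```

Note it only ever looks at points along one path, never at the whole search space, which is why it can get stuck in a local minimum of a bumpier function.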
An alternative method is to pick a few (10? 100?) random points near your initial point, calculate the function at each of them, and select the one with the minimum value for the next iteration. It's not as efficient as gradient descent, but just by chance about half of the random points should get a smaller value (unless you are too close to the minimum, or the function has something strange).
So just this randomized method should also find the "nearest" local minimum.
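The same toy problem with the randomized version; no derivatives needed, just sample nearby and keep whatever is better (radius and sample counts are arbitrary choices):

```python
import random

random.seed(0)  # fixed seed only so the sketch is reproducible

def random_descent(f, x, radius=0.5, samples=20, steps=300):
    for _ in range(steps):
        # a few random points near the current one
        candidates = [x + random.uniform(-radius, radius) for _ in range(samples)]
        best = min(candidates, key=f)
        # only move if one of them is actually better
        if f(best) < f(x):
            x = best
    return x

f = lambda x: (x - 3) ** 2
x_min = random_descent(f, x=0.0)
```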
The problem with DNA is that it is a discrete problem, and the function is weird: a small change can be fatal or irrelevant. So there is no smooth function where you can apply gradient descent, but you can still try picking random points and selecting one with a smaller value.
There is no simulation that picks the random points and calculates the fitness function. The real process happens in the offspring: the copies of the DNA have mutations, and some mutations kill the individual, some do nothing, and some increase the chance to survive and reproduce.
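To make the discrete version concrete, here is a deliberately toy mutation-and-selection loop over 4-letter strings. The target sequence and the "count matching positions" fitness are invented purely for illustration; real fitness is of course survival and reproduction, not distance to a known answer:

```python
import random

random.seed(1)
ALPHABET = "ACGT"
TARGET = "GATTACA"  # hypothetical "fit" sequence, for illustration only

def fitness(seq):
    # toy fitness: number of positions matching the target
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.1):
    # imperfect copy: each letter has a small chance of a point mutation
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

population = ["AAAAAAA"] * 50
for generation in range(200):
    offspring = [mutate(p) for p in population]          # copies with mutations
    survivors = sorted(offspring, key=fitness, reverse=True)[:25]
    population = survivors * 2                           # the fitter ones reproduce
```

Most mutated copies are worse or neutral, yet the population drifts toward higher fitness just because the better copies leave more descendants, with no step anywhere that enumerates the 4^7 (let alone 4^1,000,000,000) possibilities.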