Who is talking about neurons? Beneficial random mutations propagate, harmful ones don't, on average. In this way, the genetic code that survives mutates along the fitness gradient provided by the environment. The first self-propagating structure was tiny.
It's not literally the gradient descent algorithm as used in ML, because individual changes are random rather than chosen along an extrapolated gradient, but the end result is the same: selection keeps the steps that happen to point uphill.
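To make that concrete, here's a minimal mutate-and-select sketch (the toy bit-string fitness and every name in it are mine, purely illustrative): it climbs toward the optimum without ever computing a gradient, just by keeping non-harmful random changes.

    import random

    def fitness(genome):
        # stand-in fitness landscape: more ones = fitter
        return sum(genome)

    def evolve(length=50, generations=2000):
        genome = [random.randint(0, 1) for _ in range(length)]
        for _ in range(generations):
            mutant = genome[:]
            i = random.randrange(length)
            mutant[i] ^= 1                    # one random point mutation
            if fitness(mutant) >= fitness(genome):
                genome = mutant               # beneficial/neutral change propagates
        return genome

    print(fitness(evolve()))  # approaches 50 despite every step being random

No step is chosen by looking at the gradient, yet the accepted steps trace it out, which is the whole analogy.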
>computationally intractable nature of 4^1,000,000,000
which is a completely wrong number, even if only because of codon degeneracy. The genetic code maps 64 possible codons onto just 20 amino acids plus a stop signal (61 coding codons, 3 stop codons), so many different sequences encode exactly the same protein.
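Rough arithmetic using the standard genetic code (the per-codon bit figures are a back-of-the-envelope illustration of the overcounting, not a real information-content estimate):

    import math

    codons = 4 ** 3        # 64 possible nucleotide triplets
    meanings = 20 + 1      # 20 amino acids + stop signal
    print(codons)                        # 64
    print(round(math.log2(codons), 2))   # 6.0 bits/codon if every triplet were distinct
    print(round(math.log2(meanings), 2)) # ~4.39 bits/codon of actual meaning
    # so 4^1,000,000,000 counts raw sequences, not distinct outcomes

And that's before even getting into non-coding regions or the fact that most of that space is never searched.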