A hearty second for mechanize. It basically wraps urllib2, which is neat. That said, I've encountered situations where language support for distributed systems would have saved some real frustration. There's a version of mechanize for Erlang [1], which I intend to try out whenever I get around to learning Erlang :)
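For anyone who hasn't tried it, driving a form with mechanize looks roughly like this (the URL and field names here are made up for illustration):

    import mechanize

    br = mechanize.Browser()
    br.set_handle_robots(False)          # mechanize obeys robots.txt by default
    br.open("http://example.com/login")  # hypothetical URL
    br.select_form(nr=0)                 # select the first form on the page
    br["username"] = "alice"             # hypothetical field names
    br["password"] = "hunter2"
    response = br.submit()
    print(response.read())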
To me, the most interesting thing about PNaCl is that it will be the first real test of LLVM as a portable assembly bitcode rather than just a compiler IR. There are arguments for why LLVM may not be a good fit for that role, but given the momentum it has been gaining, I can only see good things happening for LLVM if more people use it.
This is true of LLVM bitcode in general, but PNaCl specifies an abstract machine that defines pointers as 4 bytes long. It also specifies little-endian byte order, etc.
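A quick way to see what gets baked in (my own illustration, poking at the host ABI from Python):

    import ctypes, struct

    # Pointer width differs per target: 8 on a 64-bit build, 4 on 32-bit.
    # Ordinary compiled code fixes this at compile time, which is why
    # PNaCl's abstract machine pins pointers to 4 bytes.
    print(ctypes.sizeof(ctypes.c_void_p))

    # Byte order differs across hosts too; PNaCl just declares little endian.
    print(struct.pack("=I", 1))  # b'\x01\x00\x00\x00' on a little-endian machine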
It would be unethical to maintain a harmful allele of a gene, assuming that the particular allele is unequivocally harmful.
But that is not always the case. The textbook example is sickle cell anemia: although it is very harmful in homozygous individuals, it confers resistance to malaria without negative side effects in heterozygous individuals, which is beneficial.
There's also the possibility that a gene that is harmful today becomes beneficial at some indeterminate point in the future, for reasons we cannot predict. That is the logic behind genetic diversity in a species, which allows it to cope with new and unpredictable environments by essentially letting alleles compete in the "natural marketplace."
If we're going to take control of our genomes and select for ourselves which alleles are harmful or beneficial, we must at least be prepared to preserve genetic diversity, if not in living individuals, then in gene banks or genomic databases.
A search on Wikipedia turned up this 1976 paper [1] (unfortunately behind a paywall), which derives a memristor circuit emulating Hodgkin-Huxley dynamics, a good biophysical model of the membrane potential of a neuron. In particular, it can reproduce the generation of action potentials.
The relevant part is on p. 210: "In particular, the potassium channel of the Hodgkin-Huxley model should be identified as a first-order time-invariant voltage-controlled memristive one-port and the sodium channel should be identified as a second-order time-invariant voltage-controlled memristive one-port."
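For the curious, here's a bare-bones forward-Euler integration of the standard Hodgkin-Huxley equations (textbook squid-axon constants, not the paper's memristor circuit) that makes the quote concrete: the potassium conductance g_K*n^4 has one state variable, hence "first-order", while the sodium conductance g_Na*m^3*h has two, hence "second-order", and each state variable evolves with the voltage history.

    import math

    C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3   # uF/cm^2 and mS/cm^2
    E_Na, E_K, E_L = 50.0, -77.0, -54.387       # reversal potentials, mV

    def a_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
    def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
    def b_h(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))
    def a_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)

    V, m, h, n = -65.0, 0.05, 0.6, 0.32         # resting state
    dt, I_ext = 0.01, 10.0                      # ms and uA/cm^2 (step current)

    for step in range(int(50 / dt)):            # simulate 50 ms
        I_Na = g_Na * m**3 * h * (V - E_Na)     # second-order memristive port
        I_K = g_K * n**4 * (V - E_K)            # first-order memristive port
        I_L = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        if step % 500 == 0:
            print("t = %4.1f ms, V = %7.2f mV" % (step * dt, V))

With the injected current the trace fires repetitively, i.e. the action potentials the comment mentions.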
There's a really cool experimental history behind our understanding of the cochlea, especially the work of George Zweig, who was once a particle physicist. A lot of this work was done in the '70s and is, IMHO, among the best examples of biophysics you can find.
To nitpick at the math: "no free lunch" results hold only in aggregate, averaged over the _entire_ space of problems you could be asked to solve. Obviously, algorithms can and do perform differently over the relatively few inputs (compared to infinity...) that they actually encounter. It's similar to undecidability: just because a problem is undecidable in general doesn't mean you can't compute it for certain subsets of inputs, and compute it reasonably well (for some definition of reasonable).
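A toy version of the aggregate claim (my own illustration, not from the theorem papers): average over every possible objective on a tiny domain and any two fixed search orders come out identical.

    from itertools import product

    DOMAIN, VALUES = 4, 3
    orders = ([0, 1, 2, 3], [3, 1, 0, 2])  # two arbitrary fixed search strategies

    for order in orders:
        total = 0
        for f in product(range(VALUES), repeat=DOMAIN):  # all 81 objectives
            best = max(f)
            # probes until this order first sees the global maximum
            total += next(i + 1 for i, x in enumerate(order) if f[x] == best)
        print(order, "average probes to a max:", total / float(VALUES ** DOMAIN))

Both strategies print the same average; any one of them only wins on particular subsets of objectives.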
Agreed... I was in a rush to catch the train this morning and didn't have a chance to elaborate; I shouldn't do that.
However, my point was that most of the algorithms used at that link (ANNs, SVMs, etc.) have similar expressive power (VC dimension) and have been shown to perform similarly on object recognition.
People normally take advantage of their specific properties rather than paying too much attention to how well the algorithm will perform (since both SVMs and ANNs are expected to perform reasonably well). I still maintain that any difference in classification performance is more likely related to how the team handled the data than to the chosen algorithm.
Deep convolutional learning is the difference here, and it does seem to be an interesting architecture, one that the current state of the art only supports with ANNs. But that doesn't mean somebody won't come up with a strategy for deep learning on SVMs or another classification technique in the future.
Although SVMs and layered neural nets have similar expressivity, the similarity is much like Turing completeness: it can't tell apart the Haskells from the Unlambdas. SVMs express certain functions in a form that grows exponentially with the input, whereas a deep learner tends to be more compact. The key to being a deep learner is using unsupervised learning to seed a hierarchy of learners that learn ever more abstract representations.
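The standard toy example of that exponential gap (mine, not the parent's) is n-bit parity: a "deep" circuit chains XORs with size linear in n, while a flat sum-of-products needs one term per odd-weight input pattern, i.e. 2^(n-1) of them.

    from functools import reduce
    from itertools import product

    def parity_deep(bits):
        # depth and size both grow linearly: just fold XOR across the input
        return reduce(lambda a, b: a ^ b, bits)

    def parity_shallow_terms(n):
        # a flat OR-of-ANDs needs one conjunction per odd-weight pattern
        return [p for p in product((0, 1), repeat=n) if sum(p) % 2 == 1]

    print(parity_deep([1, 0, 1, 1]))  # -> 1
    for n in (4, 8, 16):
        print(n, "inputs ->", len(parity_shallow_terms(n)), "shallow terms")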
I'm not a condensed matter researcher, but from what I know these particular Majorana fermions are quasiparticles, so it's a little misleading to compare this directly to the Higgs boson, which is an elementary particle. Still, this is cool because it means topological quantum computing might be possible.
A more extreme example along the same lines as Watchmen is the anime series Neon Genesis Evangelion. NGE totally rips apart mecha anime and gives its characters psychological issues to tragic effect, mirroring creator Hideaki Anno's own struggle with depression.
While I like NGE very much, the whole psychology aspect of it is mostly nonsense, though. I have never seen anyone manage to explain it from beginning to end without contradicting themselves. That's part of the magic of why people find it superb: they don't get it, because there isn't much to get :)
Looks a lot like TeX, which is a good thing. In fact, this might even be better. I'm tired, so I might be missing an obvious weakness, but to take the analogy a bit further, here are some comparisons between this suggestion and a more TeX-like version.
The author's suggestion:
{b {i This is in italics and bold.}}
{Henny+Penny We can use google fonts anywhere if we just import them first with the google-font code}
{macro foobar {u {b %s}}}
TeX-like (yes I'm making up the keywords):
{\b {\i This is in italics and bold.}}
{\font {Henny+Penny} We can use google fonts anywhere if we just import them first with the google-font code}
{\macro {foobar} {\u {\b #1}}}
I would definitely prefer the first alternative over the TeX-like one. The analogy also suggests, though, that instead of HTML's "<br />" you could have a TeX-like atom "\br" instead of "{br}"; it saves only one character, but it's easy to spot inside a block of text.
The main problem with the author's suggestion is that he mixes everything together, so he'll end up in the markup version of DLL hell.
E.g., how do I use relative links? Is {subdir Hello world} a relative link, a font name, or a new and as-yet-unsupported tag?
HTML handles this: <a href='subdir'> versus <font name='subdir'> versus <subdir>...
Oh, and why support font names and colors directly in tags in 2012? He should support class names instead!
Why is "fontname from URL" hardcoded for Google fonts? Why not a generic syntax that handles whatever site you might want to use.
Why support simple macros without any support for formatting numbers and currency? Your server-site language should support this, so why send it to the browser?
Image (pic) elements are missing height/width, so we're back to the relayout flashes that the NCSA_Mosaic browser had whenever it loaded an image.
Exercise for the reader: let your editor remove one } by random. Figure out yourself where it's missing by just reading the source.
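To underline that last point: with this grammar the best a tool can do is report the opening brace that never got closed, not where the closer actually went missing. A hypothetical checker:

    def check_braces(text):
        stack = []  # offsets of currently unmatched '{'
        for i, ch in enumerate(text):
            if ch == '{':
                stack.append(i)
            elif ch == '}':
                if not stack:
                    return "stray '}' at offset %d" % i
                stack.pop()
        if stack:
            return "unclosed '{' opened at offset %d" % stack[-1]
        return "balanced"

    # The author's own example with the final '}' dropped:
    print(check_braces('{b {i This is in italics and bold.}'))
    # -> unclosed '{' opened at offset 0

The diagnostic points at the very first brace of the document, which is no help at all in a long source file.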
You can use the _style attribute with any element. For example: {b_style="font-size:20;font-family:Courier New" content}. I intended it to work with other HTML attributes, so using multiple attributes would look like this: {b_attr1='asdf'_attr2="asdf" content}. It would be almost just like HTML, except for the quirky syntax.

I only allow the style attribute for now because it's intended to be safe, and I didn't want to allow things like onclick or anything JavaScript. Allowing a class attribute would also be easy, but for now I didn't want to, because the site I will soon be adding this to could use a previously defined class that's width 600 or something. Right now, style is kept well under control if you try to make things too wide or use something like display:none.
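A simplified sketch of that kind of allowlist filtering (illustrative only, not the site's actual code):

    # Hypothetical allowlist filter; property names chosen for illustration.
    ALLOWED_PROPS = {"font-size", "font-family", "color", "font-weight"}

    def safe_style(style):
        kept = []
        for decl in style.split(";"):
            if ":" not in decl:
                continue
            prop, value = decl.split(":", 1)
            if prop.strip().lower() in ALLOWED_PROPS and "url(" not in value.lower():
                kept.append(prop.strip() + ":" + value.strip())
        return ";".join(kept)

    print(safe_style("font-size:20;font-family:Courier New;display:none"))
    # -> font-size:20;font-family:Courier New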
[1] https://github.com/tokenrove/mechanizerl