So for how long will the Python community pretend that performance isn't a problem? (Inb4 "all the intensive parts of my app are written in C", "my app isn't CPU bound", and "we have libraries written in C")
The interpreters (especially CPython) need some hardcore engineering (the way JavaScript got V8). Languages with the same theoretical performance limits as Python are now blowing it out of the water.
The stories about teams that were able to go down from N servers to 1 server by switching from Python to Go/Scala/node.js will hurt it in the long run.
> So for how long will the python community pretend that performance isn't a problem?
Which Python community are you talking about? Cython, Numba, NumPy, PyPy?
> The stories about teams that were able to go down from N servers to 1 server by switching from Python to Go/Scala/node.js will hurt it in the long run.
Have you actually followed what's going on, or just read the link-bait articles? Nearly everyone who re-implements something in another language uses different paradigms during the re-implementation, because they've learned something from the original. Besides, those articles normally don't discuss the opportunity cost of re-invention or the long-term maintenance costs. I'm not saying the rewrites can't improve things, but until you have the TCO over a few years, it's not an accurate portrayal.
The large amount of excellent work being done on PyPy would seem to indicate that the Python community does not, in fact, pretend that performance isn't a problem.
I don't think it is denial per se, but rather mostly inertia. Python has been a victim of CPython's success, in the sense that there is now a huge body of work tied to the CPython API. This is a problem for IronPython and Jython, and continues to be a problem for PyPy (though they are making strides to solve it), and it will be a problem for newer projects like Pyston. Numba has sort of skirted the issue by integrating with CPython, but the jury is still out.
>>The stories about teams that were able to go down from N servers to 1 server by switching from Python to Go/Scala/node.js will hurt it in the long run.
Concurrency design and architecture play a greater role than the language most of the time.
Bottlenecks are often in bad synchronization patterns, data structures, or the overall design.
The language matters for semantic verification, formal models (for model comparison...), type-system design, proof systems, etc.; the GIL is just one factor.
You can't compare one language against another directly on that basis.
After that, the buck passes to the underlying implementation, but in this discussion CPython is only thrown in as the reference implementation; for production-grade interpreters we might have to look elsewhere, such as the one from Enthought.
However, whether a language's built-in patterns help express concurrency is another question altogether.
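To make the "design matters more than the GIL" point concrete: CPython releases the GIL while a thread is blocked on IO, so IO-bound work can still run concurrently with plain threads. A minimal sketch (using `time.sleep` as a stand-in for a blocking network or disk call):

```python
import threading
import time

def io_task(results, i):
    # time.sleep stands in for a blocking network/disk call;
    # CPython releases the GIL while a thread is blocked, so
    # these waits overlap despite the GIL.
    time.sleep(0.2)
    results[i] = i * i

results = {}
threads = [threading.Thread(target=io_task, args=(results, i)) for i in range(10)]

start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Ten 0.2s "IO" waits overlap, so total wall time is roughly 0.2s, not ~2s.
print(elapsed, len(results))
```

The same ten tasks run sequentially would take about two seconds; whether the architecture lets waits overlap matters far more here than raw interpreter speed.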
Even scalable web-whatever is often IO-bound... of my several apps in production, there's only one where CPU is a serious concern, and I run that one on PyPy. CPython is fine for the others.
I'll go so far as to say that we haven't done enough with making things fast on multicore processors except with IO-bound tasks. The current architecture of multicore processors stinks for achieving anything but the "embarrassingly parallel."