
It's a funny thing:

Advocates of dynamic languages tend to claim that the flexibility they offer — dynamic duck typing, dynamic dispatch, runtime reflection, eval — is a major advantage.

And yet every time someone actually tries to meaningfully use those features, they say ‘why would you do that, it's too confusing’ and tell people to stick to writing code that's just as easily expressed in a statically-typed, statically-dispatched, AOT-compiled language, while still paying the costs of their environment supporting those features.

If you're going to write Python like C, why even bother?



Just because you can, doesn't mean you should.

A large part of Python's popularity is due to the fact that there's a reasonably well-defined 'Pythonic' way to do things, which everyone can learn and then have a decent experience both using and reading code produced by others.

You can implement fancy operators, overloading, entire DSLs in Python; but by doing so you break the pythonic contract and make your creation stand alone with a separate learning curve. There are some valid reasons to do this, especially for bespoke in-house tooling, but open source modules intended for mass use have virtually no justification to deviate from the primitives which the entire community is used to.


> Just because you can, doesn't mean you should.

I think this is very much true, but actually I disagree with you when it comes to OSS. For example, Django makes heavy use of metaclasses in order to simplify its API, and I think that's fine, because no junior developer realistically needs to contribute to such a project. They can work on a project which uses Django without needing to understand the internals.

Having said that, I was only introduced to SQLAlchemy a couple of years ago, when already pretty competent at Python. Their filter syntax (ab)uses __eq__ to allow you to write expressions such as `MyModel.my_field == 'query'`, which return an expression object that can be evaluated dynamically when applied to a SQL query. I did a double take when first looking at this, at first assuming it was a typo. I then ended up digging into the internals of SQLAlchemy to find out how it all fits together. The upshot was that I explored the SQLA API in great detail. The downside is I spent a few hours doing it :D
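The trick reads roughly like this minimal sketch (my own toy version, not SQLAlchemy's actual internals): __eq__ returns an expression object instead of a bool, which can be rendered to SQL later.

```python
# Toy sketch of the pattern: overloading __eq__ to build a query
# expression instead of comparing values. Names are hypothetical.

class Column:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # Instead of returning True/False, build a filter expression.
        return Expression(self.name, "=", other)


class Expression:
    def __init__(self, column, op, value):
        self.column, self.op, self.value = column, op, value

    def to_sql(self):
        # Render to a parameterized SQL fragment.
        return f"{self.column} {self.op} :param", {"param": self.value}


my_field = Column("my_field")
expr = my_field == "query"   # an Expression object, not a bool
print(expr.to_sql())         # ('my_field = :param', {'param': 'query'})
```

Note the catch that makes this an (ab)use: overriding __eq__ this way breaks ordinary equality checks (and hashing) for Column objects, which is exactly why it looks like a typo at first sight.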


If all we do is write Pythonic code (especially now that "Pythonic" seems to include type hints), what's the benefit of the highly dynamic CPython virtual machine?

Surely a faster VM, or even an ahead-of-time compiler, would be possible if we give up on some dynamism? Is that a direction the community should take?

(I think Guido's answer would be no, based on his apparent dislike of existing "Python compiler" projects such as Nuitka.)


I have never heard anyone claim that eval is a good thing.

I use dynamic duck typing and runtime reflection in some places in the Python I write.

For instance, I might attach an extra attribute to an object and use that later on - e.g. a request object that lives for the duration of the request and is discarded later on.

Or rewrite a certain function into a loop that goes over the attributes and does the same thing to each of them.
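Both patterns look something like this sketch (names are made up for illustration):

```python
# Two small uses of dynamism: attaching an ad-hoc attribute to a
# request-like object, and reflecting over attributes with vars().

class Request:
    def __init__(self, path):
        self.path = path


def authenticate(request):
    # Attach an extra attribute for later handlers to read; it lives
    # only as long as the request object does.
    request.user = "alice"


def snapshot_fields(obj):
    # Loop over the instance's attributes instead of naming each one.
    return {name: value for name, value in vars(obj).items()}


req = Request("/home")
authenticate(req)
print(snapshot_fields(req))   # {'path': '/home', 'user': 'alice'}
```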

But I could live without them, at the cost of some contortions.

I think the really big benefit is not having to spec out unimportant infrastructure between functions in a module. The lack of a spec makes it easier to keep local things local.


eval (or exec) is how dataclasses (and namedtuples before them) work: https://github.com/python/cpython/blob/4c1b6a6f4fc46add0097e... . It is a big gun, but sometimes (usually deep within lib code) big guns are the right answer. Don't use reflection or runtime bytecode emission in Java... unless you have to. Don't drop to inline asm in C... unless you have to. And so on.
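The code-generation trick boils down to something like this (a condensed sketch, not the actual CPython source): build the method's source as a string, exec it, and pull the resulting function out of the namespace.

```python
# Sketch of the exec-based code generation dataclasses/namedtuple use:
# generate __init__ source at runtime and exec it into a namespace.

def make_init(fields):
    args = ", ".join(fields)
    body = "".join(f"    self.{f} = {f}\n" for f in fields)
    src = f"def __init__(self, {args}):\n{body}"
    namespace = {}
    exec(src, namespace)           # compile and run the generated source
    return namespace["__init__"]


class Point:
    pass


Point.__init__ = make_init(["x", "y"])

p = Point(3, 4)
print(p.x, p.y)   # 3 4
```

The payoff is that the generated __init__ is an ordinary function with real positional parameters, so it's as fast and introspectable as hand-written code, which is hard to get from *args-based alternatives.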

And, of course, if it _is_ the right decision to use such a tool, be aware of just how easy it is to use big guns wrong. Even this very article's 'just do it' tone seems to convey a lack of respect for decorators, what I'd consider a 'medium gun' in Python. So many intermediate Python programmers write decorators that don't properly interoperate with the descriptor protocol and thus either fail to work on instance methods (as here: https://repl.it/repls/ThreadbareCurlyKeyboard ) or hardcode a 'self' arg in their wrapper and thus don't work on global functions. I'm fine with simplifying it for an article but for production code this is a pet peeve of mine :p
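The classic failure mode shows up with class-based decorators: without __get__, the decorated attribute never participates in method binding, so `self` is never passed. A hedged sketch of the fix (names are my own):

```python
import functools

# A class-based decorator. Without __get__ it breaks on instance
# methods, because plain objects don't take part in method binding.

class logged:
    def __init__(self, func):
        functools.update_wrapper(self, func)
        self.func = func

    def __call__(self, *args, **kwargs):
        print(f"calling {self.func.__name__}")
        return self.func(*args, **kwargs)

    def __get__(self, obj, objtype=None):
        # Implement the descriptor protocol so attribute access on an
        # instance binds 'self', just like a normal function would.
        if obj is None:
            return self
        return functools.partial(self.__call__, obj)


class Greeter:
    @logged
    def hello(self, name):
        return f"hello, {name}"


@logged
def shout(name):          # also works as a plain global function
    return name.upper()


print(Greeter().hello("world"))   # calling hello / hello, world
print(shout("hi"))                # calling shout / HI
```

Delete __get__ and `Greeter().hello("world")` fails with a missing-argument error, which is exactly the bug the repl above demonstrates.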


> or hardcode a 'self' arg in their wrapper and thus don't work on global functions. I'm fine with simplifying it for an article but for production code this is a pet peeve of mine :p

To be fair, this is such an easy class of mistake to make that the standard library does it. @functools.lru_cache is bugged for instance methods.
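The lru_cache issue is slightly different but in the same family: the cache keys include `self`, so the class-level cache keeps a strong reference to every instance that ever called the method. A small sketch of the leak:

```python
import functools
import gc
import weakref

# Pitfall sketch: lru_cache on a method caches on (self, args), so the
# cache holds a strong reference to each instance forever.

class Expensive:
    @functools.lru_cache(maxsize=None)
    def compute(self, x):
        return x * 2


obj = Expensive()
obj.compute(21)

ref = weakref.ref(obj)
del obj
gc.collect()
print(ref() is None)   # False: the cache still keeps the instance alive
```

The usual workarounds are caching on a plain module-level function, or building a per-instance cache in __init__ so the cache dies with the instance.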


I think it really depends on both language and language culture. Dynamic dispatch and function overloading are the basic way to program in Elixir: every basic tutorial will teach newcomers how to do it, linters will complain if your functions have too many conditionals inside instead of outside, and programs will elegantly look like state machines at the top level. On the other hand, Elixir also has macros, but they are discouraged and considered an advanced topic, while in other languages like Racket and the Lisp family they are usually a prominent tool.

What comes into play is the principle of least surprise. If a feature is frequently used, then everyone can identify it, know its effects, and understand its limitations, and so it stops being confusing. If it's a feature built for the 1% of libraries that require some special DSL syntax, and you can only find it deep in the manual or in advanced books, then it's probably something you should use very sparingly.


Because when you always take advantage of all that flexibility, you end up with Perl.

Everyone was always so excited to show off their mastery of Perl and do fancy things that it was very hard to maintain, and Perl got a reputation as a "write-only"[1] language.

You should follow the principle of least-surprise, and follow the idioms of the language. It's great to have the flexibility when you absolutely need it, but that should be reserved for rare cases and be very well commented.

[1] https://en.wikipedia.org/wiki/Write-only_language


I don't know about "every time"; it's more that they should be used judiciously. Using them to allow you to have two different functions that perform two different calculations but bear the same name wouldn't qualify as judicious use in my book.

There are similar problems with getting a bit too happy with macros in Lisp, or type-level programming in Haskell, or templates in C++, or self-modifying code in assembly language. In all cases, the principle is the same: prefer the simplest way that gets the job done.


I phrase it this way:

It's a lot easier to understand, say, decorators in python than annotations in Java.

The other thing is that a lot of the dynamic languages are getting cool features to support static dynamism really well. Protocols and Literal types are my two favorites in Python, which allow statically verifiable duck typing and argument-controlled return types, respectively.
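Both fit in a few lines; here's a sketch (with made-up names) of what each buys you under a static checker like mypy:

```python
from typing import Literal, Protocol, overload

# Protocol: statically verifiable duck typing. Duck never inherits
# from Quacks; it satisfies it structurally.

class Quacks(Protocol):
    def quack(self) -> str: ...


class Duck:
    def quack(self) -> str:
        return "quack"


def make_noise(animal: Quacks) -> str:
    return animal.quack()


# Literal + overload: the declared return type depends on an argument.

@overload
def read(path: str, mode: Literal["text"]) -> str: ...
@overload
def read(path: str, mode: Literal["bytes"]) -> bytes: ...
def read(path, mode):
    data = b"payload"   # stand-in for real file contents
    return data.decode() if mode == "text" else data


print(make_noise(Duck()))        # quack
print(type(read("f", "text")))   # <class 'str'>
```

At runtime the overloads do nothing special, but a checker will flag `read(p, "text") + b""` as a type error, which is exactly the "static dynamism" point.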


That flexibility is very useful when used sporadically. It becomes unreadable when people use it too much or in the wrong places.


> If you're going to write Python like C, why even bother?

I agree with you, but I suspect a lot of people fall into the "I want to be able to do this, but I never want to see anyone else doing it" camp.




