
I'm not sure this constitutes any problem other than a lack of understanding of the python runtime. What the author describes as:

"the mutable default parameter quirk is an ugly corner worth avoiding"

could also be described as:

"a natural outcropping of python's late binding, "names are references" variable model, and closure mechanisms, which provide a consistency to the language that is often crufted up in others"

I do somewhat agree with the author that this particular functionality should be a "use only when needed" feature. I don't think it should be avoided at all costs, though, because there are times when a mutable default saves a lot of code. In fact, in a few cases the code to work around mutable defaults gets into some serious voodoo, because frequently the writer is really trying to work around the bigger mutable/immutable objects and "names are references" issues in Python.
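For anyone who hasn't run into it, the quirk being discussed is that the default expression is evaluated once, when the function is defined, so a mutable default is shared across calls. A minimal sketch (names made up):

    def append_to(item, acc=[]):   # acc is created once, at definition time
        acc.append(item)
        return acc

    append_to(1)   # [1]
    append_to(2)   # [1, 2] -- the same list object, not a fresh one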

This also reminds me of something I was reading on the front page today about the old 'use the whole language' vs 'simplicity is king' holy war.



"a natural outcropping of python's late binding, "names are references" variable model, and closure mechanisms, which provide a consistency to the language that is often crufted up in others"

Hm... that's debatable. The implementation could just as easily have chosen to evaluate the default arguments each time the function is invoked, and that decision wouldn't have broken any of the existing mental models of variable binding/closures.


Precisely because Python has late binding, you would expect the defaults to be evaluated on each call to the function.

One thing Python lacks is the ability to use preceding arguments in defaults, e.g. you cannot do this:

    def f(a=3, b=a+1):
        return (a + b) / 2
    
    NameError: name 'a' is not defined
Oops.


There is no order for named arguments. You could call the function like this after all:

  f(b=3)


Of course; in that case the default value expression for `b' would not be evaluated.

Common Lisp does this right.


That hides default logic in the signature. Why would you favor that over

  def f(a=3, b=None):
    b = b or a+1 # or use a more explicit version
    return (a + b) / 2
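The "more explicit version" would presumably be an `is None' check, which also avoids treating a legitimately passed 0 as "use the default":

  def f(a=3, b=None):
    if b is None:  # only fall back when b was actually omitted
      b = a + 1
    return (a + b) / 2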


That code is logically convoluted, which is one reason not to love it. Which do you think is the more straightforward statement, just in general?

- "Let B be one greater than A unless otherwise specified."

- "We have no default value for B. If B has a value, then let B be equal to that value. If B does not have a value, then let B be one greater than A."


Because that can be said more succinctly. It is even easier to read, since there is less to read, which I realize is mostly subjective.


I'm pretty sure this is the same argument used to make Perl a bad guy.


I disagree, but we've gotten a bit off topic.

I think Python's behaviour is confusing and basically never what anyone actually wants. Regardless of whether or not you can use other params in defaults, the defaults should be evaluated on each call.
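To make the current behaviour concrete: the default expression is evaluated once, when the def statement runs, not on each call. A small sketch:

    n = 1
    def f(x=n):
        return x

    n = 2
    f()   # still 1 -- the default was captured when the def ran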


"a natural outcropping of python's late binding, "names are references" variable model, and closure mechanisms, which provide a consistency to the language that is often crufted up in others"

My mileage varies.

I'd prefer default parameters to honor referential transparency, whatever hoops the runtime has to jump through to make this happen.


That would make that the only place in which Python has referential transparency, though. It may be a quirky side-effect of consistency, but it is consistent.


Strings are also immutable, and thus have referential transparency. And so do numbers in Python.
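Which is also why the default-argument surprise doesn't bite with string or number defaults: operations on them rebind the name to a new object instead of mutating the shared default. A rough sketch:

    def tag(s="x"):
        s += "!"   # builds a new string; the shared default is untouched
        return s

    tag()   # 'x!'
    tag()   # 'x!' every time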


Referential transparency is a property of functions, not data (unless you're in a lambda mood and treat them as functions of zero arguments, but in that case you're not in Python so it's not relevant here). Even a function to "concatenate two strings" could be passed an object that overloads the addition operator to cause arbitrary modifications:

    >>> class Evil(object):
            def __init__(self):
                self.evil = 1
            def __add__(self, other):
                result = ("%s" % self.evil) + other
                self.evil += 1
                return result


    >>> def referentially_transparent_concat(a, b):
            return a + b

    >>> e = Evil()
    >>> print referentially_transparent_concat(e, "hi")
    1hi
    >>> print referentially_transparent_concat(e, "hi")
    2hi
You can program in a referentially-transparent style with Python, but you'll have to do it by adding your own restrictions to the code you write. Python will not help you with that.


In your code the devil is in the data-type.



