Ruby's string concatenation syntax is borrowed from C, and it's a major win: it allows you to spread a single string onto multiple lines without backslash-quoting all the newlines.
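For anyone who hasn't seen it, a small illustration of the adjacent-literal concatenation the quoted line refers to (note that spanning source lines still needs a trailing backslash to continue the statement):

```ruby
# Adjacent string literals concatenate at parse time, as in C:
greeting = "Hello, " "world"        # => "Hello, world"

# To spread one string over several source lines, a trailing
# backslash continues the statement:
long = "first part, " \
       "second part"
```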
This has got to be one of the worst pieces of gushing fanboy-ism I've run into lately. "Look how cool Ruby is, because you can do these crazy, unreadable, unmaintainable things. I don't know why you'd want to do them, but isn't it awesome that you can? Of course, us Ruby programmers are so smart that we would never actually use any of this hard-to-understand stuff, unlike those silly Perl programmers."
I don't know why you'd want to do them, but isn't it awesome that you can?
I like that sentiment, especially if it's tempered with not actually doing any of them until you run into a legitimate use case.
But in the meantime, finding weird corner cases seems very hacker-ish to me. It may not make any sense to me, but someone else may look at the article, say to themselves, "aha!", and discover a new way to do something.
I can't help but think that Symbol#to_proc probably looked just as weird the first time it was explained to someone: when the interpreter sees &, it sends #to_proc to its operand and converts the result to a block. So if we implement a #to_proc method for Symbol, we can use &method_name instead of writing out a block...
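To make that concrete, here's a sketch of what a hand-rolled version might look like. (Ruby has shipped a real Symbol#to_proc since 1.8.7; `as_block` is a made-up name here so the sketch doesn't shadow the built-in.)

```ruby
# Sketch: a Symbol knows how to turn itself into a Proc that sends
# itself to whatever receiver the block is handed.
class Symbol
  def as_block
    proc { |receiver, *args| receiver.public_send(self, *args) }
  end
end

# `&` passes a Proc through as the block (or calls #to_proc first):
words = %w[foo bar baz]
words.map(&:upcase)            # built-in Symbol#to_proc
words.map(&:upcase.as_block)   # same result via the sketch above
```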
All that being said, I have no idea what you've been reading lately, so maybe it really is the worst you've encountered. Under the circumstances, I won't actually disagree with you, just share my own perspective :-)
I think it would be better to either find a legitimate use for the feature, or conclude that it might be a mis-feature.
For example, I can see the value of allowing a default function argument to be a Proc/lambda. I can even imagine how it might make sense to allow it to be a named function declaration -- if def actually returned a function object, rather than nil as the article notes. But what the author demonstrates seems to me nothing more than a broken consequence of Ruby's syntax.
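To spell out the legitimate case: a default argument is an ordinary expression evaluated at call time, so a lambda default is perfectly reasonable, and the article's oddity falls out of the same rule because `def` is itself an expression. (The names below are invented for illustration; `def` returned nil in the Rubies the article covers, and returns the method name as a Symbol since 2.1.)

```ruby
# A lambda as a default argument, evaluated at call time:
def transform(value, op = ->(v) { v.to_s })
  op.call(value)
end

transform(42)                   # => "42"
transform(42, ->(v) { v * 2 })  # => 84

# The quirk leans on `def` being an expression, so it can sit in a
# default; its value is version-dependent (nil, or :helper since 2.1):
def quirky(x = (def helper; end))
  x
end
```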
Finding weird corner cases may indeed be very hacker-ish, but blindly labeling them as good is not, in my opinion.
I could be wrong about the value of this construct; someone might come up with something useful you can do with it. But, until then, I'm going to view it as nothing more than a curiosity, and certainly not as an example of "ZOMG look how cool Ruby is!!11!".
Regarding Symbol#to_proc, I would say that it is only syntactically weird. But if you have any background with functional languages, it's an incredibly natural semantic construct.
Finding weird corner cases may indeed be very hacker-ish, but blindly labeling them as good is not, in my opinion.
Did Thomas do that? I don't see anywhere where he "blindly label(s) them as good". Right there in the title he calls them Quirks, and he says you have to love them. Then, in the text he explains, quite clearly, that you should almost never use these in actual code.
I think you've missed the point that the examples presented (especially the function definition in a default argument) merely show how Ruby works; specifically that all code is executed as expressions during runtime. To me these "quirks" are similar to showing somebody that dereferencing an offset to an array pointer in C is the same as using an array index. You're not going to do it in production code, but knowing that it's possible helps in better understanding what's going on with all that code you write.
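The everything-is-an-expression point can be shown in two lines (a small sketch; `Demo` is an invented name):

```ruby
# Everything in Ruby is an expression that yields a value at runtime,
# which is the rule the article's quirks fall out of:
x = if 1 > 0 then :yes else :no end   # `if` is an expression => :yes
k = class Demo; self; end             # a class body is an expression too
k                                     # => Demo
```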
I'm not so sure. It just read like a series of interesting tricks, akin to when someone dives into lisp and starts to realize the cool things you can do with such syntactic flexibility. Of course you'll have a few, "Wow, that's cool! I can't imagine it being useful, but wow" moments. And I'm glad that he's chosen to share it.
I'm not a huge fan of flexible syntaxes, which might be just a bunch of language engineers revolting against periods, semicolons and parentheses. I happen to like my code to be easily readable and I'm willing to give up the time lost in typing a few extra keystrokes for that benefit.
I think you're doing something wrong if you need glyphs and punctuation for your code to be readable. I get dots and all that, but semicolons and friends are just noise.
A flexible syntax introduces ambiguity, which forces me to think even more. Consider this simple example (which may be wrong for all I know):
x = foo bar, baz
Is it x = foo(bar, baz), x = {foo(bar), baz} or something else? I could probably derive the true meaning by looking at surrounding code, but I shouldn't have to.
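For what it's worth, current CRuby resolves the line as a single call: the comma makes bar and baz both arguments to foo, i.e. x = foo(bar, baz). A quick check (foo, bar, baz are stand-ins):

```ruby
def foo(a, b)
  [a, b]
end

bar = 1
baz = 2
x = foo bar, baz   # parses as x = foo(bar, baz)
x                  # => [1, 2]
```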
Well, sure. You can write code like that, but that doesn't mean you should. I'm not arguing for so-called "poetic mode" code (I hate it, actually), but I think languages like Ruby and Python strike a really solid balance between low line noise and clarity if you follow what one might call the normal coding conventions established by the community. Of course, these aren't codified for the most part, so it's hard to know exactly what they are, but I think everyone can agree that "x = foo bar, baz" should be clarified to "x = foo(bar, baz)", if nothing else because it eliminates any parser confusion if you add a predicate conditional or something like that.
Anyhow, my point is that I can write C or Java or whatever else in ways that are just as unreadable, but that doesn't mean I will. It's a skill issue more than a language issue. ;)