Hacker News | fadmmatt's comments

Yes.


It will look suspiciously similar to the Scripting Language Design and Implementation class that I'll be teaching.

This kind of stuff showed up in my program analysis and advanced compiler classes, too.

One of the class projects was to implement first-class macros, and we used an interpreter that looked a lot like the one from this article:

http://matt.might.net/articles/metacircular-evaluation-and-f...

(/article author)


Sure, but the article doesn't use eval.


You make your own eval for the lambda calculus, so that's not entirely accurate ;)

Edit: (Yes I know the difference, and I changed the he to 'you')


That's a good question, actually.

Lisp macros need knowledge of the structure of the s-expression.

Certainly one can Church encode s-expressions and provide defmacro.

Gensym is trickier. Hygiene might be really challenging.

I might actually try that out.

(/article author)
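For the curious, here's a rough sketch (in Python, so take it as illustrative rather than as the Scheme from the article) of what Church-encoding pairs looks like, which is the starting point for Church-encoding s-expressions. All names here are my own, not from the linked article.

```python
# Church-encoding pairs with plain lambdas: a pair is a function
# that hands its components to whichever continuation you pass in.
cons = lambda a, d: lambda on_pair, on_nil: on_pair(a, d)
nil  = lambda on_pair, on_nil: on_nil()

car = lambda p: p(lambda a, d: a, lambda: None)
cdr = lambda p: p(lambda a, d: d, lambda: None)

def to_list(sexp):
    """Walk a Church-encoded list back into a native Python list."""
    return sexp(lambda a, d: [a] + to_list(d), lambda: [])

# The s-expression (1 2 3) as nested Church pairs:
sexp = cons(1, cons(2, cons(3, nil)))
print(to_list(sexp))  # [1, 2, 3]
print(car(sexp))      # 1
```

The open problem the parent raises is real, though: a Church-encoded pair can only be *used*, not inspected for its syntactic shape, which is exactly what defmacro needs.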


"Lisp macros need knowledge of the structure of the s-expression."

I recently rediscovered this for myself: http://arclanguage.org/item?id=11880


I was an entrepreneur (failed twice), and I'm now a professor.

Being a pre-tenure professor is way more terrifying than being an entrepreneur was.

And, depending on the field, between 25% and 75% of your salary as a professor will come from being able to procure external funding.

If you can't convince the funding agencies to pay you, then tenure buys you an office, a teaching load and health care.

It's been terrifying for me because my hit rate is about the same as Matt's. I've had very little luck getting funding for my research.

And, at the last funding panel I served on, the funding rate was down to 5%. My own fund-seeking overhead is now at 60% of my time, and I'm still not getting any.

Either we have too many scientists, or not enough science funding. I don't think the current system is sustainable.


Either we have too many scientists, or not enough science funding.

A little bit of both, to which I would add a third: the way we fund science is... I'm going to charitably say "sub-optimal"


And then there's us lot in non-science disciplines who sit back and think "gosh, lucky scientists, they get all the funding..." :)


It tends to produce a treadmill, though: since science funding is available, science profs are expected to get it. In some areas with less funding, it's perfectly normal for professors to have their students TA most of the time; but in most science departments, the prof's expected to pay a substantial proportion of their students from grant money, and it'll look bad if a prof is always "dumping students on the department" by funding them through TAships.

And if you're expected to pay your students, it takes a lot of money! One student, including tuition, stipend, and departmental overhead, costs around $50k, so if your lab is 5 students, you have to bring in $250k a year just to support your grad students. And if you start dipping below about $150k, so that you're supporting fewer than half your students, people will start grumbling, and it'll look bad for your tenure case. (You can't avoid it by having fewer students, either, because having only 2 students will also look bad for your tenure case.)


You can't do this with #define alone. You can pass these "functions" around arbitrarily, and it still works.


When I taught advanced compilers last semester, one of the projects was a Scheme-to-C compiler that implemented first-class continuations via call/cc.

I realized when going over that material, however, that most students didn't know what a continuation was, or how to use them, so I created a "by-example" tutorial:

http://matt.might.net/articles/programming-with-continuation...

It covers exceptions, backtracking-search, the magic "amb" function, magic sat-solving, generators and cooperative threads.
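The tutorial builds amb on call/cc; Python has no first-class continuations, but the backtracking *behavior* of amb can be sketched with plain exhaustive search over choice points (a substitute technique, not the article's implementation — all names here are mine):

```python
from itertools import product

def amb_solve(choices, constraint):
    """Try every combination of choices; return the first assignment
    satisfying the constraint, or None if the search fails."""
    for assignment in product(*choices):
        if constraint(*assignment):
            return assignment
    return None

# Find x, y with x * y == 12 and x < y:
result = amb_solve([range(1, 10), range(1, 10)],
                   lambda x, y: x * y == 12 and x < y)
print(result)  # (2, 6)
```

With real continuations, the search resumes *inside* the computation at each choice point instead of restarting from the top, which is what makes amb feel magical in Scheme.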


That's a fantastic resource! I've been fascinated with continuations recently because of just what you cover: they are a tool that can be used to implement many higher-level tools, like exceptions, generators, and cooperative threads. Things like this become much clearer when you see how closely they are related.

I am curious about your assertion that "with macros, it is not difficult to simulate preemptive multithreading". How would one go about doing this safely?

(The font changes partway through, btw. Maybe you forgot to close a tag?)


Good question.

In short, you can use the macros to hide the context-switching (yielding).

You could, for instance, force every function to perform a context switch right after it's called. Since languages like Scheme encode even loops as function calls, you're guaranteed to eventually hit a switch point. (And, if you allow other looping constructs, you could use macros to hide a yield inside them as well.)
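A macro-free sketch of the same idea in Python (my own illustration, not Scheme): the role of the macro is played by writing each task as a generator whose yields mark the hidden switch points, and a round-robin scheduler drives them.

```python
from collections import deque

def run(tasks):
    """Round-robin scheduler: run each task until its next hidden
    yield, then re-enqueue it, until every task finishes."""
    queue = deque(tasks)
    trace = []
    while queue:
        task = queue.popleft()
        try:
            trace.append(next(task))  # run to the next switch point
            queue.append(task)        # context switch: back of the line
        except StopIteration:
            pass                      # task finished; drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"  # where a macro would insert the switch

print(run([worker("a", 2), worker("b", 2)]))
# ['a:0', 'b:0', 'a:1', 'b:1']
```

If a macro expander rewrote every call site to insert that yield automatically, the programmer would never see the switch points, which is what makes it look preemptive.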


Neat; but C++0x does this:

[](int x, int y) { return x + y; }

instead of this:

lambda<int> (x,y) --> x + y

Ironically, the template hack could be shorter in many cases.


And those few extra characters give you an enormous amount of flexibility, such as not needing to define wrapper template specializations for all your types, not needing to have extra variables sitting around to define your argument placeholders, and being able to use more than just simple expressions.


(Article author here.)

I'd love to use Haskell, but I'm helping out on an exascale DoE project, and C++ is all we're allowed to use. Ugh.

Of course, lambdas don't make C++ more powerful, but that's just reductio ad Turing tar-pit.

You can turn that argument around, too: there's no problem you can solve with C++ that you can't solve with lambdas alone:

http://matt.might.net/articles/church-encodings-demo-in-sche...

If programming languages were about computational power, we would have stopped with Fortran.

Programming languages are about the freedom of expressions.
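To make the "lambdas alone" claim concrete, here's a sketch in the spirit of the linked demo (in Python rather than the Scheme of the article; the names are mine): booleans and numerals built from nothing but single-argument lambdas.

```python
# Church booleans: a boolean is a two-way chooser.
TRUE  = lambda t: lambda f: t
FALSE = lambda t: lambda f: f
IF    = lambda c: lambda t: lambda f: c(t)(f)

# Church numerals: the number n is "apply f, n times."
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def church_to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

TWO   = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(church_to_int(PLUS(TWO)(THREE)))  # 5
print(IF(TRUE)("yes")("no"))            # yes
```

Nothing here uses a numeric literal, a conditional, or a data structure from the host language: control and data are both encoded as functions.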


"If programming languages were about computational power, we would have stopped with Fortran.

Programming languages are about the freedom of expressions."

I strongly disagree: /some/ programming languages are about expressive power; others are about computational power.

We didn't stop with Fortran, we moved on to C and then C++. They might not be able to do anything more in the computer-science-theoretical sense, but in practice they produce faster executables, and in real-world applications that means more computational power can be brought to bear on problems with these sorts of languages.

Maybe speed doesn't matter for your problem, or lambdas are an especially nice fit. Mainly I'm just worried that you are going to produce substandard code with this kind of mindset... good code is written with the language's strengths and weaknesses in mind, not to enable the flavour-of-the-month paradigm to be used in an extremely sub-optimal fashion.


> [...] but in practice they produce faster executables [...]

C++ is faster than Fortran?


In all tests I've done - yes it is. Although I haven't tested every compiler and platform for obvious reasons.


For anyone that uses an ML (e.g. OCaml), Haskell or Scala, Milner was responsible for the Hindley-Milner type system and inference algorithm that sits at the foundation of these languages.

It ignited a wave of research into type theory that continues to this day, yet in many ways, Hindley-Milner is still the most significant contribution to the field.

I remember reading his original paper on polymorphic typing for my qualifiers and being struck by the elegance and approachability of his writing.

Well worth a read.


Any chance you could post some info about the paper? I've been itching to learn more about how type inferencing (especially polymorphic type inferencing) works.

EDIT: I don't know if it's the paper you were referring to, but I did find this: http://groups.csail.mit.edu/pag/6.883/readings/p207-damas.pd...


Milner, Robin (1978), "A Theory of Type Polymorphism in Programming", Journal of Computer and System Sciences 17: 348–375

I don't know if there's a free copy online anywhere.

The Damas-Milner paper is the sequel; it presents an alternate algorithm for type inference.

Benjamin Pierce's "Orange Book" is one of the best references now.

