I was an entrepreneur (failed twice), and I'm now a professor.
Being a pre-tenure professor is way more terrifying than being an entrepreneur was.
And, depending on the field, between 25% and 75% of your salary as a professor will come from being able to procure external funding.
If you can't convince the funding agencies to pay you, then tenure buys you an office, a teaching load and health care.
It's been terrifying for me because my hit rate is about the same as Matt's. I've had very little luck getting funding for my research.
And, at the last funding panel I served on, the funding rate was down to 5%. My own fund-seeking overhead is now at 60% of my time, and I'm still not getting any.
Either we have too many scientists, or not enough science funding. I don't think the current system is sustainable.
It tends to produce a treadmill, though: since science funding is available, science profs are expected to get it. In some areas with less funding, it's perfectly normal for professors to have their students TA most of the time; but in most science departments, the prof is expected to pay a substantial proportion of their students from grant money, and it'll look bad if a prof is always "dumping students on the department" by funding them through TAships.
And if you're expected to pay your students, it takes a lot of money! One student, including tuition, stipend, and departmental overhead, costs around $50k, so if your lab has 5 students, you have to bring in $250k a year just to support your grad students. And if you start dipping below about $150k, so that you're supporting fewer than half your students, people will start grumbling, and it'll look bad for your tenure case. (You can't avoid it by having fewer students, either, because having only 2 students will also look bad for your tenure case.)
When I taught advanced compilers last semester, one of the projects was a Scheme-to-C compiler that implemented first-class continuations via call/cc.
I realized when going over that material, however, that most students didn't know what a continuation was, or how to use them, so I created a "by-example" tutorial:
That's a fantastic resource! I've been fascinated with continuations recently because of just what you cover: they are a tool that can be used to implement many higher-level constructs, like exceptions, generators, and cooperative threads. Things like this get much clearer when you see how closely they are related.
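To make the exceptions case concrete, here's a toy sketch of my own in Python (not from the tutorial): Python has no call/cc, so the continuations are written out explicitly in continuation-passing style, with a separate handler continuation playing the role of the exception handler.

```python
# Continuation-passing sketch of exceptions: every function takes a normal
# continuation `k` and a handler continuation `h`. "Raising" is just
# calling `h` instead of `k`; no native try/except is involved.

def safe_div(n, d, k, h):
    if d == 0:
        return h("divide by zero")   # "raise": jump to the handler
    return k(n / d)                  # normal return: continue with the result

def run(n, d):
    # Install the top-level continuations: one for success, one for failure.
    return safe_div(n, d,
                    lambda v: ("ok", v),
                    lambda msg: ("err", msg))

print(run(10, 2))  # ('ok', 5.0)
print(run(1, 0))   # ('err', 'divide by zero')
```

The same two-continuation trick scales up: generators hand the "rest of the loop" to the consumer, and threads hand it to a scheduler.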
I am curious about your assertion that "with macros, it is not difficult to simulate preemptive multithreading". How would one go about doing this safely?
(The font changes partway through, btw. Maybe you forgot to close a tag?)
In short, you can use the macros to hide the context-switching (yielding).
You could, for instance, force every function to perform a context switch right after it's called. Since languages like Scheme encode even loops as function calls, you're guaranteed to eventually hit a switch point. (And, if you allow other looping constructs, you could use macros to hide a yield inside them as well.)
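Here's a hypothetical sketch of that scheme in Python rather than Scheme, with the yields written out by hand where a macro would insert them: each "thread" is a generator that yields at its would-be call boundaries, and a round-robin scheduler switches between them.

```python
from collections import deque

# Each "thread" yields where a Scheme macro would have inserted the context
# switch; the scheduler resumes threads round-robin until all have finished.

def worker(name, n, log):
    for i in range(n):
        log.append((name, i))
        yield  # the hidden context switch

def run_threads(threads):
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)            # run until the next switch point
            ready.append(t)    # still alive: back of the queue
        except StopIteration:
            pass               # this thread is done

log = []
run_threads([worker("A", 2, log), worker("B", 2, log)])
print(log)  # [('A', 0), ('B', 0), ('A', 1), ('B', 1)]
```

The difference from true preemption is that the switch points are fixed at translation time; the macro's job is to make them dense enough (every call, every loop iteration) that no thread can hog the processor.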
And those few extra characters give you an enormous amount of flexibility, such as not needing to define wrapper template specializations for all your types, not needing to have extra variables sitting around to define your argument placeholders, and being able to use more than just simple expressions.
"If programming languages were about computational power, we would have stopped with Fortran.
Programming languages are about the freedom of expressions."
I strongly disagree: /some/ programming languages are about expressive power - others are about computational power.
We didn't stop with Fortran; we moved on to C, then C++. They might not be able to do anything more in the computer-science-theoretical sense, but in practice they produce faster executables, and in real-world applications that means more computational power can be brought to bear on problems with these sorts of languages.
Maybe speed doesn't matter for your problem, or lambdas are an especially nice fit. Mainly I'm just worried that you are going to produce substandard code with this kind of mindset... good code is written with the language's strengths and weaknesses in mind, not to enable the flavour-of-the-month paradigm to be used in an extremely sub-optimal fashion.
For anyone who uses an ML (e.g. OCaml), Haskell, or Scala, Milner was responsible for the Hindley-Milner type system and inference algorithm that sits at the foundation of these languages.
It ignited a wave of research into type theory that continues to this day, yet in many ways, Hindley-Milner is still the most significant contribution to the field.
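For a taste of how the inference side works, here's a toy sketch of my own in Python of Algorithm-W-style inference for the pure lambda calculus; it omits let-polymorphism and the occurs check, so it's nowhere near the real algorithm, but the unification core is the same idea.

```python
from itertools import count

# Toy Hindley-Milner-style inference for the pure lambda calculus.
# Types: ("var", n) or ("fun", arg_type, result_type).
# Terms: ("ref", x), ("lam", x, body), ("app", f, arg).
# Omits let-polymorphism and the occurs check for brevity.

fresh = count()

def apply_subst(s, t):
    if t[0] == "var":
        return apply_subst(s, s[t]) if t in s else t
    return ("fun", apply_subst(s, t[1]), apply_subst(s, t[2]))

def unify(s, a, b):
    a, b = apply_subst(s, a), apply_subst(s, b)
    if a == b:
        return s
    if a[0] == "var":
        return {**s, a: b}
    if b[0] == "var":
        return {**s, b: a}
    # both are function types: unify argument sides, then result sides
    s = unify(s, a[1], b[1])
    return unify(s, a[2], b[2])

def infer(env, term, s):
    if term[0] == "ref":
        return s, env[term[1]]
    if term[0] == "lam":
        tv = ("var", next(fresh))
        s, tbody = infer({**env, term[1]: tv}, term[2], s)
        return s, ("fun", tv, tbody)
    # application: the result type is a fresh variable pinned down by unification
    s, tf = infer(env, term[1], s)
    s, ta = infer(env, term[2], s)
    tr = ("var", next(fresh))
    return unify(s, tf, ("fun", ta, tr)), tr

# \f. \x. f (f x) should come out as (a -> a) -> a -> a
twice = ("lam", "f", ("lam", "x",
         ("app", ("ref", "f"), ("app", ("ref", "f"), ("ref", "x")))))
s, t = infer({}, twice, {})
print(apply_subst(s, t))  # ('fun', ('fun', a, a), ('fun', a, a)) for some type var a
```

The remarkable part of Milner's result is that this constraint-solving view yields a *principal* type - the most general one - without any annotations from the programmer.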
I remember reading his original paper on polymorphic typing for my qualifiers and being struck by the elegance and approachability of his writing.
Any chance you could post some info about the paper? I've been itching to learn more about how type inferencing (especially polymorphic type inferencing) works.