I am (now) a Lisp programmer and find myself boring many people to tears about it. But perhaps I am first a language guy, and like to see new solutions to problems. None gives me the satisfaction that Lisp has (Smalltalk was close, however). But I like to see new well-designed languages and how well, or not, they address the engineering challenges.
For example, Turbo Pascal was written during the time that I was writing a Pascal compiler. The Pascal standard was truly a harsh mistress, and Turbo Pascal made a lot of smart engineering choices to make a fast, very useful compiler in a small environment. Anders went on to do C#, which was itself a good engineering feat.
I watch with interest Clojure and the ecosystem that it plugs into. And it has a REPL.
But it is a pragmatic fact of life that C is a very large force in the day-to-day world, as well as a tool used by serious hackers (cf. Coders at Work).
Python is very nicely designed and gets you to a higher level of programming with ease.
So when Go came along I was curious, as a language-development observer, to see what they were coming up with.
It is clearly a language that has had a lot of thought put into it, and it seems to be something that will ultimately challenge C on its own turf. Faster than Python, and possibly approaching C in speed eventually, it has some useful new ideas. In particular, how it does goroutines and channels is quite refreshing. It addresses the admonition that "threads won't work if they are just libraries" with new language constructs.
So criticism of a language ought to be tempered by some experience with it. Having done enough of them, it seems that I often don't have a full appreciation for a language until I have done something significant in it. So I feel comfortable offering severe criticism of languages like RPG-III, Fortran II, Altair Basic, and Bliss-36.
I am probably to the point of being a lisp snob by now, but do appreciate the finer points of other languages.
I think Go is a step in the right direction for the problem that it wants to solve.
How can you say that C is challenged by Go? Go is built with C. This is true of Python as well, in being a C abstraction. C is well suited to UNIX-like programming, where you create efficient system libraries, with as little abstraction as possible, I might add.
> "Programming language quality is usually inversely proportional to the number of special forms"
Programming language obscurity is also usually inversely proportional to the number of special forms. Church numerals?
The happy place is somewhere in the middle, where there's enough language such that you don't have to build it out of other pieces - and this also helps with performance, static analysis, tooling, debugging and lots of other areas - but there isn't so much that you end up with lots of methods to do the same thing, in similar but incompatible ways.
Having few special forms does not rule out syntax or having useful functions. Look at Scheme, for example. (Or Haskell, most of whose syntax desugars into functions you could write yourself, like records or do-notation or list comprehensions.)
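To make the desugaring point concrete: in Common Lisp, LET itself can be treated as sugar over LAMBDA and recovered with an ordinary macro. A minimal sketch (MY-LET is a made-up name, of course):

    ;; LET rebuilt from LAMBDA; nothing primitive about it.
    (defmacro my-let (bindings &body body)
      `((lambda ,(mapcar #'first bindings) ,@body)
        ,@(mapcar #'second bindings)))

    (my-let ((x 2) (y 3))
      (* x y))
    ;; expands to ((lambda (x y) (* x y)) 2 3) => 6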
And yet Antoine is famous for book-length works, not poetry, and certainly not minimalist poetry.
Don't take the lesson too literally. You can, in fact, take too much away. How much you need to put in depends on your artistic goals and, of course, on your audience.
Well, there's necessary complexity and unnecessary complexity. Perhaps he felt that in order to communicate the ideas and story in his book, he had to write a book-length work.
My point is that with programming language abstractions, you can in fact remove things without hurting the theoretical power of the language while actually harming its practical power.
You do have to leave things in that you could take away.
I am very certain that almost all of them could be removed without reducing the theoretical power of Common Lisp. At worst, successive rewrites of the program could reduce it to using a different set of simpler forms.
Well yeah, you don't need all those special forms to keep the language Turing complete, but you do need them to provide a lot of what people regard as the "powerful" features of common lisp. I wouldn't care to program without flet, for example, but obviously one could remove flet without any loss of power in a certain sense.
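A rough sketch of what I mean, in Common Lisp:

    ;; With FLET: a local function, callable by name.
    (defun sum-of-squares (xs)
      (flet ((square (x) (* x x)))
        (reduce #'+ (mapcar #'square xs))))

    ;; Without FLET: the same thing as a closure in a variable.
    ;; No theoretical power lost, only convenience.
    (defun sum-of-squares-2 (xs)
      (let ((square (lambda (x) (* x x))))
        (reduce #'+ (mapcar square xs))))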
No, I mean that you could create a Lisp which had fewer forms, but that provided these special forms - that you would program using - except that they would no longer be special, but rather synthesized out of baser forms.
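WHEN is the classic example; it is typically implemented exactly this way, so a hypothetical MY-WHEN behaves identically:

    ;; A "special-looking" form synthesized from IF and PROGN.
    (defmacro my-when (test &body body)
      `(if ,test (progn ,@body) nil))

    (my-when (> 3 2)
      (print "true branch")
      (+ 1 2))  ; => 3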
"I could show these poor imperative programmers how their problems could be solved much easier in Lisp if I wanted to." That to me (as an outsider) seems to be the main message of these Lisp blogs. But at some point, nobody will believe you could do it, unless you actually do.
You seem to be complaining that... Lisp doesn't exist? You seem to be saying his complaints are of the form "A hypothetical language I have in my head can do that better", but the complaint is of the form "This existing language does all that and better."
Given that the only other "problem" in the post is a Hello World program, surely you aren't complaining that the author failed to include a "Hello World" Lisp program? Well, here you go:
(print "Hello World")
I do not understand what you're saying at all.
(Also, I'm not a lisper. In fact I'm increasingly of the opinion that it represents a wrong direction for the future of programming languages. So this is not a "pro-Lisp bias" thing.)
I'm curious; I rarely see any REAL[1] criticism of Lisp, just the old "ughh, my eyes, all those parentheses, aaa" type of nonsense. Why do you think it's a wrong direction?
[1] I'm not talking about implementations, obviously; Common Lisp, Scheme, and Clojure have their problems. I'm talking about the general idea of Lisp.
I think Lisp's philosophy, as explained by Paul Graham (perhaps you've heard of him :) ), is to enable the programmer to do anything they put their mind to. When choosing whether to do the powerful thing or the simpler thing, they will choose the thing that gives the programmers more power, and point out (not entirely incorrectly) that they can wrap up simpler versions for simpler cases with macros.
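The canonical shape of that wrapping, sketched in Common Lisp (ACQUIRE-LOCK and RELEASE-LOCK are stand-ins for whatever lock API you have, e.g. bordeaux-threads):

    ;; The powerful primitive: UNWIND-PROTECT and manual cleanup.
    ;; The simple wrapper for the common case: a WITH- macro.
    (defmacro with-lock-held ((lock) &body body)
      `(progn
         (acquire-lock ,lock)  ; hypothetical lock API
         (unwind-protect (progn ,@body)
           (release-lock ,lock))))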
As a skilled developer, I certainly prefer this philosophy to the Java philosophy of "If the feature might be abused, leave it out." (Obviously, I simplify, but there's a lot of truth there.) But I don't think it's the right way to think of languages going forward, and I don't think it can scale.
Instead of seeing languages in terms of what they permit, I see them in terms of what they deny. In particular, what invariants do they maintain? That might seem an odd way of looking at it, but it sets up the next question: what do languages build on top of this invariant?
All the languages that I am currently interested in, and all the ones that I think provide the way forward, maintain invariants that Lisp does not let you maintain, precisely because it gives the Lisp programmer too much power. One obvious example: You can't "mutable lisp" your way into a language that has immutable variables all the time. If the language provides for mutation, even if you program in a strictly immutable subset of the language, your libraries, including your very base libraries that make up the "runtime", are likely to be mutation-based. (Getting around this might be possible but hardly worth it.) Erlang builds a huge runtime around having such pervasive immutability, and while mutable Lisps can borrow large swathes of Erlang's libraries and capabilities, they are incapable of providing a guarantee at the language level that no function you ever call will mutate a value out from underneath you. Erlang is a great example of building on quite a few invariants (read "programmer limitations") and producing things that are harder and/or impossible without them.
Another example: Type system. Lisp certainly lets you layer type systems on top of your language, but without enforcement by the language, there's a barrier as to how much you can take advantage of it. What the ML family does with types is not something you can add to a Lisp and end up with the same thing you get with native language support. You can come close, but there's a final level of integration or loop-closing (don't have a clean word for this concept) that you can not get to.
Smaller example: Haskell's laziness. You can add laziness constructs to a strict language (see also: Python generators), but you can not macro your way to a fully-lazy language starting with a strict one, at least not in any reasonable manner. (This feature excites me much less, but it's another example of something you have to start with.)
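For instance, a thunk-based DELAY/FORCE takes a few lines of Common Lisp (a sketch; Scheme has these built in, CL does not):

    ;; Opt-in laziness bolted onto a strict language.
    ;; (Gensym hygiene omitted for brevity.)
    (defmacro delay (expr)
      `(let ((forced nil) (value nil))
         (lambda ()
           (unless forced
             (setf value ,expr forced t))
           value)))

    (defun force (promise)
      (funcall promise))

    ;; Every lazy site must be marked explicitly, which is exactly
    ;; why this never adds up to a fully lazy language:
    (force (delay (progn (print "evaluating") 42)))  ; => 42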
When the entire philosophy is about empowerment, you close the doors to a lot of things that can only be obtained by carefully-selected disempowerment of the programmer, most of which are fringe right now, but which I think will become more important over the future. Truly ultimate power only comes from assembler; languages give you their power by restricting the assembler they can generate.
In the single programmer case, especially the mature single programmer case, "more power" may be a great thing, but I think it breaks down pretty quickly as you get more people involved, especially as you get less-than-experts involved, and I think a lot of Lisp's failures to take over the world come from this basic issue, and some second-order effects.
What I want to close with is something I already said: Certainly, if I had to choose between a conventional Java-style B&D language and Lisp, I'd take Lisp. But the interesting action is occurring in the fields dominated by constraining the programmer: parallel programming developments, STM, immutability and/or banning shared memory, and potentially other interesting things that all involve first taking things from programmers, then building on the resulting invariants, invariants that languages like Lisp or Perl or Ruby make a big point of not constraining you with. (I'm interested in type-safe string manipulations to ban XSS at the type level, for instance.) There are Lisps doing those things, but notice precisely that they have to be built in at the language level; you can't just macro your way to Clojure's STM. (And if that does happen to be how it is implemented, I'd run screaming; STM isn't very useful unless it is a very strong guarantee. And any connection with other JVM libraries will be very hard, too.)
A lot of it has to do with the relative power of context in languages. If I write a statement, what other places in the code could the meaning of that statement be modified? Languages like Lisp let you modify the meaning of things in tons of ways. Languages like Java only let you change the meaning of things in certain carefully restricted ways, like object polymorphism.
In order to understand what is actually happening in any one part of a Lisp program, I may have to understand most of the rest of the program, if it is written in a particularly complicated way. This is rarely, if ever, the case in Java.
Both static typing and immutability are trade-offs, and we need to know when it's appropriate to make them. I get concurrency for the price of immutability in Clojure, BUT I could still screw things up using Java, and I think that this is a good thing (but I'm a power-hungry Lisp programmer, obviously :D). It's my opinion that what you call the WRONG direction is actually the OTHER direction.
Paul Graham often talks about the partial ordering of language goodness, but doesn't know what beats Lisp. Possibly still nothing does, but this provides the way to give concrete examples: Since no one language can choose all invariants, it is likely that over time certain invariants will come to be seen as better in some niches than others, so there will indeed be several "winners" at the top of the ordering based on different choice of invariants.
Lisp will still be one of them, the choice of no invariants that restrict the programmer. It's a viable niche. It just isn't the only niche. I think it's less viable than Lisp advocates think and that this is part of the reason it never has taken off, but here we enter the realm of opinion as there isn't anywhere near enough science to actually know. (Indeed, the whole of idea of "science" sounds weird here because we are so far away from having it at all.)
I also point to foldr's message in this thread. Strong typing has its pros and cons, even when done as nicely as Haskell's typing can be, and personally, I don't think that's ever going to change. Sometimes you're going to want it and sometimes not.
So, I guess a real criticism of Lisp would be that it is stuck in the past. Lispers are still looking down on C; they never try to look up. If they did, they would see that languages like ML and Haskell have surpassed them in all respects. Heck, even Matlab and its descendants, with their "everything is a matrix" philosophy, have created a powerful new paradigm that is certainly not a special case of "everything is a list". In short, the world has flown by Lisp, and Lispers (at least the vocal ones on blogs) still haven't noticed.
> ML and Haskell have surpassed them in all respects.
I must have missed the memo explaining why static typing is always better. Can you forward it to me?
> Lisp [is] stuck in the past...
You lose a little credibility here for not specifying which lisp you mean. I'll assume you mean Common Lisp because:
- Scheme is in the middle of reinventing itself after the R6RS unpleasantness, and that process could have any number of interesting repercussions.
- Clojure is very young, and already has a good chance of becoming the first mainstream functional programming language. I'll leave the value of that distinction unspecified for now, but it would be interesting.
- Some of the features available to Common Lisp programmers in the past really are impossible to duplicate today, and will probably remain so for the foreseeable future. By all accounts, the system-level introspection and debugging capabilities available on Lisp Machines were a real treat to operate. We can reasonably disagree about how useful it might be to have the OS, toolchain, editor, and system software all written in the same language and presenting a uniform interface, but I don't see how anyone could claim that ML and Haskell have surpassed Lisp on this point.
Take away static typing from Haskell and you still have a richer language than Lisp, with more interesting research happening in the community. It is ironic that e.g. nested data parallelism has its roots in Lisp (as early as 1990) but might soon enter the mainstream through Haskell.
Somehow the Lisp programmers haven't gotten the message that Lisp has been surpassed in all respects by Haskell - I wonder why that is. Do you have an idea why Lisp has cool applications and Haskell has curses-based MP3 player frontends?
Because Common Lisp has been suitable for building complex applications for more than 20 years, while Haskell has been in that state for about 2? It follows that Lisp would have at least an order of magnitude more applications, and that's neglecting "network effects".
Also, Common Lisp still has no libraries or coherent community. Every time I ask #lisp about a library for something trivial, the answer is "write it yourself". This is annoying to people that don't want to shave that particular yak.
Haskell's community is nearly the opposite, so I bet in 18 years we will see many more Haskell apps than Common Lisp apps. Unless the community starts being nice to people, and people start sharing code worth sharing, that is.
The 'Common Lisp has no libraries' thing disqualifies you somehow. I think I have a couple million lines of Lisp code on my laptop. ITA Software has by now written around 650 kloc for their reservation system, and additionally they are using 150 kloc of public libraries.
Sure the Common Lisp community is not that 'coherent' - the language is used for very different things. Still there are meetings and conferences where users meet.
The first Haskell report appeared in 1990. I'd say there has been lots of time to get applications written. Before Haskell there was Miranda.
Common Lisp has a lot of libraries. Look at CLIKI for some pointers.
So, did you write your Common Lisp library yourself? That's what I usually do when I miss some stuff or would like a different implementation.
Which libraries are these? How much internal tweaking was done to get them to install? Even the best CL libraries I know of (cxml comes to mind) have required lots of hacking (from me) to get them to work on my Linux + SBCL machine. That is the path of least resistance, as far as I know, and it is not very smooth. Even saying nothing about the underlying package repositories, asdf-install is certainly nothing like cpan or cabal-install. Counting the underlying package repositories... well... it's cute that CL tried... (I say this as a big Lisp fanboi, BTW.)
I went to ILC this year. It was nothing like any programming conference I had ever been to. Everyone droned on and on about their incomplete research project or some company they founded that happens to use Lisp. Someone who admitted to never programming in any language other than Lisp told us that it was clear that Lisp was the best programming language. Yeah, you sure convinced me...
Most conferences I go to are about programming techniques or actual working code you can immediately download and use. I think there were about 5 sessions like this at ILC. The rest were largely irrelevant.
> The first Haskell report appeared in 1990. I'd say there has been lots of time to get applications written.
No, Haskell was junk until about 2 or 3 years ago. Sorry. The runtimes were slow and buggy, there was no community of people writing software, and there were no libraries. There was no packaging system or library database, even. Expecting anyone to write a useful Haskell application would have been completely unreasonable. (But of course, there were some; darcs and ghc for example.)
The same is not true of Lisp. It has been pretty much the same for a very long time.
Look, Common Lisp and Haskell are among my favorite programming languages. But I am not surprised that neither have a wide following, either.
I was at the ECLM a few weeks ago. There was that guy who said he wrote a few million lines of C code himself (he was not the youngest, though). He was presenting a signal processing environment written in Lisp.
I found him convincing when he said that he prefers to code in Lisp.
A few days ago I made a version of Axiom runnable under LispWorks (with a bit of help from the maintainer). Very complex software with a complex build process, ported to several Common Lisp systems. Took me a day to get it to work under LispWorks.
Lispers have used lots of powerful paradigms - and no, not everything is a list. If you look at the current Common Lisp development environments, everything is a CLOS object.
I think that this is an unfair criticism; you don't really supply any evidence. (Other than that Lispers are completely oblivious to anything but lisp).
ML and Haskell have certainly surpassed common lisp in some respects, but to say that they have surpassed all lisps in all respects has got to be some sort of sophistry.
Lisp is a form of syntax (and a few primitives like Eval and quote) that makes it easy to write programs that manipulate programs.
Common Lisp and Lisp 1.5 and Scheme and Clojure all have certain other things in common.
However, they have those things in common because they are generally recognized as being good things to have in a language for the purpose of programmer productivity. In no place is it written in stone that to be a Lisp, it must not use Hindley-Milner type inference.
I think you'll also find that most Lispers are chronic dabblers in programming languages, to say that I'm sticking my head in the sand when I choose to write whatever-it-is in Clojure is insulting.
I also feel that Lisp is a wrong direction. Its main problem is the lack of static typing. It is understandable that a language which has existed for so long with relatively little change would lack a static type system, but with as much progress as has been made in research on type systems since then, there is no good reason for a modern language to lack a powerful type system.
There is value in static checking for some things, but it is also limiting. It requires that code be written such that those things are declarative, can be understood without running the program. There is also value in having those things to be programmable, dynamic. This is a "good reason" even if you prefer the alternative. In Clojure, the lack of static type checking was not done out of laziness.
> there is no good reason for a modern language to lack a powerful type system.
Lisps can and do have powerful type systems - you're confusing dynamic with none at all.
> There is value in static checking for some things, but it is also limiting.
Other than type-safe heterogeneous lists (which is only true if you hold static type systems to a much higher standard of type safety than dynamic type systems), what sorts of things does a type system with type classes, algebraic data types, and higher-order functions prevent you from doing?
> Lisps can and do have powerful type systems - you're confusing dynamic with none at all.
Common Lisp has a powerful object system and dispatch scheme, but it isn't as powerful as Haskell's type system. Type classes provide compile-time multi-methods which can be every bit as dynamic as CLOS's when needed, but which are statically checked as much as is possible. In addition, type classes make it possible to dispatch not just on argument type but also on return type. How do you write a Common Lisp function which polymorphically converts an integer to whatever type you need? This is admittedly a bit of a weak example, but polymorphic conversions like this can probably be useful (for example, you could use them to implement a somewhat inefficient heterogeneous list).
In Haskell (I'm not very familiar with Haskell, but my testing indicates that this would work):
    class FromInt t where
        fromInt :: Integer -> t
Haskell also has parameterized types, which aren't as needed in dynamic type systems, but that relative lack of need is only because a dynamic type system considers a program that calls a method which doesn't work on an object's type at runtime to be type-safe.
In a statically typed language with type classes and sufficiently powerful syntactic macros, it would be possible to conveniently defer to dynamic type checking in a way that would type-check statically. The macros are only necessary for the convenience part. Otherwise, you'd have to create a lot of type classes for every type you wanted to use dynamically. With macros, you could just include the creation of the appropriate classes in the form that defines a new type. In fact, it could be built into the language, making even macros unnecessary.
> Other than type-safe heterogeneous lists (which is only true if you hold static type systems to a much higher standard of type safety than dynamic type systems), what sorts of things does a type system with type classes, algebraic data types, and higher-order functions prevent you from doing?
Automatic serializing and deserializing of objects. I wrote the RJson library for Haskell to do this for JSON, but there is no way I would ever write anything like that in Haskell again. Using a statically typed language makes it necessary to use horribly complicated types and (in the case of RJson at least) to lose the guarantee that the program's types are decidable. Also, you tend to find that as the type system grows more sophisticated than basic Haskell 98, there is a much greater likelihood of incorrect programs type-checking correctly, so this becomes a less useful means of preventing errors. In writing RJson, I frequently had to determine the correct types for functions by trial and error -- it kinda defeats the purpose of strong typing when a function's type is harder to understand than the function itself.
In a language like Python or Lisp, with decent reflective capabilities, writing the equivalent library would be a whole ton easier.
CLOS is runtime dynamic. Haskell is not. Haskell is a mostly static language. Common Lisp is a mostly dynamic language and provides all kinds of mechanism to make changes at runtime to the language and programs written in Lisp. The CLOS Meta-Object Protocol allows various ways to change the object system. This has been used for various object-oriented languages on top of Lisp, user interfaces, databases and other stuff.
The 'dynamic' part of Common Lisp means that it is a runtime-programmable programming language.
Haskell and Common Lisp serve totally different purposes. Common Lisp is there for incremental, interactive development of complex software - software that is always running in the development process and gets modified until it does something useful. Software that can also be modified when deployed, while running.
> How do you write a Common Lisp function which polymorphicly converts an integer to whatever type you need?
How does the callee know what the caller needs?
Suppose that one callee produces an int (as above), the first caller wants a string (decimal digits in ASCII), and the second wants a byte array (little endian). Do you really think that the callee (or compiler) should be responsible for accommodating those conversions? Now let's add a caller that wants the index of the "high bit" - how are you going to accommodate that? (Of course, other callers want an int - I'm just mentioning the callers that require some sort of conversion.)
Neither the callee nor the compiler/system can automatically provide anything beyond the most trivial of conversions. That's why the right solution to the third caller, either an in-line shim or a wrapper, is the right solution to either the first or second caller.
> With macros, you could just include the creation of the appropriate classes in the form that defines a new type. In fact, it could be built into the language making even macros unnecessary.
Lisp macros can do many other things. (Consider the loop construct.)
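For instance:

    ;; LOOP is itself a macro, yet it embeds a whole iteration
    ;; mini-language inside Lisp:
    (loop for i from 1 to 10
          when (evenp i)
            collect (* i i))
    ;; => (4 16 36 64 100)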
If the Go people succeed in taking over some C territory, that would be the right direction, don't you think? Simply because Go seems a bit better than C.
I'd love to see some modern SML or Haskell taking over the area of code compiled to binary, plus a constant growth of dynamic languages as well, because they are also cool. But that big revolution may not happen.
Well, Lisp sure does exist, and has been used to solve real-world problems. But those were predominantly in applications where performance, parallelism, graphics, large matrix computations, and GUIs do not matter. If you want to pick on the imperative languages (i.e. Pascal/Fortran/C/C++), you'd better show you can do what they can do.
Nah. Using Lisp to script an engine written in C++ is not the same as writing the engine itself in Lisp. I'm not saying this is not possible, it's just that nobody tries.
You might be able to do a serious graphics or high-performance computing project in Lisp. You will be the first one, and I'll be very interested in how it goes.
ICAD was written in Lisp, and many turbines of several commercial aircraft have been designed with it.
CDRS was the conceptual design and rendering system, written by Evans & Sutherland in Lisp. Many cars from Ford and Jaguar have been designed with it. It was then taken over by PTC and sold as Pro/Engineer Designer.
Mirai is a 3d graphics tool used, for example, to create the animations of Gollum's face in The Lord of the Rings. Earlier versions of this software were used for animated films and many computer games, for example by Nintendo to design 3d worlds. There were other uses of this software as well; for example, the orca in 'Free Willy' was animated with it.
Let us know when there is a similar Haskell app that supports 2d+3d painting, 3d modelling, 3d animation, 3d rendering, 3d motion editing, etc. in a nice application for, say, SGIs or Windows machines - then show how to script it in Haskell at runtime.
Oh, Haskell is also not used very often for those things. It's just my (outsider) perception that Lisp programmers frequently pick on C/C++ as inferior. Haskell programmers instead ask "what can we do to make Haskell performance closer to C/C++?"
However, I am doing graphics for a living, and much of it in C++. I am sincerely interested if I can use another language to give me a higher degree of productivity and not sacrifice too much performance. Common Lisp might be able to pull this off, but it certainly is not often used in this way.
I think you'd agree it's more common for a Lisp person to ask "why do people still use these inferior languages?" rather than "how can we match the performance/scalability/parallelism of these other languages?".
C/C++ are inferior. C and C++ are very bad as dynamic and interactive languages. C makes it hard to write secure programs, and C++ is just horrible in many ways.
But C/C++ are superior as static low-level languages compared to Lisp.
The Haskell programmers should really ask themselves how to get closer to C/C++ in speed - especially since the Haskell language implementations (GHC!) are mostly static and code is usually statically compiled - just like C and C++. If Haskell compilers generate slower code or user code is slow, Haskell users have little excuse, only poorer tools and/or poorer code.
Lisp programmers do have an excuse, since the tools are optimized for safe execution of untyped dynamic (changeable at runtime) code. That's a completely different angle of programming. From there the Lisp compilers try to recover some speed by selectively removing dynamics and safety - where possible or needed. Lisp programmers ask themselves how to improve the compilers, but since both the application domains and the architectures tend to be different, things are hard to compare. C/C++ wins in the static performance contest. Lisp wins in the runtime flexibility contest. But those are really very different domains.
You think that Common Lisp programmers think that C is inferior. The reality is that these are completely different languages developed for different tasks with very different implementation and design decisions.
If you are a Lisp programmer, you can write graphics intensive applications in Lisp - you can also get near to C performance - but it's hard and it often is not full C performance (unless you are using specialized compilers). But still you would be very lonely (unless the company around you uses Lisp), since much of the industry uses C/C++ for graphics (from Low-level drivers to Maya).
Still many Lisp programmers, despite knowing that Lisp and C are different, try to use Lisp either alone or together with C in performance oriented domains: sound processing and image processing were and still are such domains.
> The Haskell programmers should really ask themselves how to get closer to C/C++ in speed - especially since the Haskell language implementations (GHC!) are mostly static and code is usually statically compiled - just like C and C++. If Haskell compilers generate slower code or user code is slow, Haskell users have little excuse, only poorer tools and/or poorer code.
Obviously Haskell could improve, but can it be as fast as C? How do you shave the last bit of overhead of type safety and strictness?
I thought type safety in Haskell is a compile time thing? Where are the runtime costs?
Then I thought that non-strictness (!), non-mutability of data and referential transparency offer all kinds of possibilities for optimizations done by the compiler?
But maybe the 'interesting research' done in the Haskell community is more into fancy type system features that get the author a PhD and not so much into compiler optimizations?
Lisp "printing" is used for two different purposes: 1) to display results for a human reader/viewer. and 2) to serialize Lisp forms for later READing.
Instead of print, try princ.
[1]> (princ "hello world")
hello world
"hello world"
The first unquoted string is the rendered display from PRINC, a side effect. The second quoted string is the return value: PRINC returns the string it printed.
Try nesting them and see what happens: (princ (princ "hello world"))
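In CLISP you should see something like this: the inner PRINC prints the string and returns it, the outer PRINC prints that returned string again, and the REPL then shows the final return value.

    [2]> (princ (princ "hello world"))
    hello worldhello world
    "hello world"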
You admit to having an anti-lisp bias without knowing anything about lisp. This whole "I hate it because I don't know it" thing is silly and unbecoming of good programmers.
Yes, if Lisp is so powerful, where are all the super programs written in it? I'm only aware of Emacs, and maybe at one point AutoCAD? And I guess PG wrote a couple of webapps with it....
That's a logical fallacy. Something can be both great and unknown to you. Usually, you seek information about things you know you need/want, and can afford. If a topic does not interest you, you will forever remain oblivious to it, except maybe for some tangential news about it that reach you by accident.
It's naive to think that just because you're a programmer you know everything there is to know about the software industry. You don't. You know about a small fraction of the mainstream development tools, and even those at a superficial level. For example, can you name the top software products for running local elections? How about automating a chemistry research lab? Packages for oil exploration? Vacation property management? Real-time spoken Chinese processing? Computational archeology? Artist talent management? Threat modeling for distributed corporate networks?
See? :-)
"Social" programmers, the guys you see in chat rooms and "community" websites are all mostly generalists, and typically work in well defined roles as grunt programmers. Nothing fancy, website here, a database there, a GUI or two, click, generate, zip, distribute, and call it a day type work. Don't let this closed echo-system define your perception of "the software industry".
There should at least be one canonical example. I'm not claiming to know everything, I'm trying to fill in my knowledge. If lisp is so great, and so much more productive, and used by the greatest programmers on Earth, point me to some great code so I can become more informed.
Should I start reading the Emacs source? Show me the code! If there's not at least one great open source project written in Lisp, I'm not inclined to take your claims prima facie.
I don't know about super-programs, but I use Clojure at work (doing things no one else would ever care about) and I use it at home to solve fun problems. I previously did the same things in PHP and Perl and Ruby and Python and such, and the Clojure versions are easier to write and maintain, easier to use, less code, more fun etc. Same for Common Lisp, generally.
I wouldn't extrapolate beyond that. I don't know what languages are good for writing super-programs since I never wrote one. Super-programs are a small subset of all programs.
Well, there's the stuff from ITA Software that powers Orbitz. Flightcaster is using Clojure. Impact Solutions in Houston uses Lisp for real time analysis of oil rig data.
I think the standard argument would be that the average programmers are using average languages, so popularity is a horrible measuring stick for powerful languages.
OTOH the Go authors made it clear that it's intended to replace C for the basic system programs & libs. I think that's a step in the right direction.
I started to earn money using Erlang & Clojure, so it looks like I'm not only a fan of exciting new high-level languages but also quite an adopter of them in the so-called real world. Still, there are some uses where the VM of your Chosen One language brings unacceptable overhead. Imagine your daily work using vi, grep, and find as if they were compiled into jars, each run via the JVM. A nightmare, with that two-second delay starting each little proggy, isn't it? (Have you ever used the original Amazon EC2 command-line tools written in Java? You'd know what I mean.)
Of course it could be solved by some way of injecting the program into an already-running VM, but that's not an established practice for working with basic programs on your system. At some level fast binaries are needed.
Personally, I think it's a pity that no language of the SML family ever made it into the mainstream for this kind of systems programming. Haskell is fun, and people write things like text editors and window managers in it; but it is still perceived as too academic for real tasks, which isn't completely true.
Experimental evidence suggests that people want to use Visual Basic and Windows, actually. Although the language stats may have changed since the last time I looked at them, I'm reasonably certain at least Windows is still on top. Make of that what you will.
Edit: For what it's worth, I have some sympathy for Microsoft's position. I don't doubt that if they could just break away from backward compatibility they could make something far less kludgy than what they have; unfortunately, if they did break away from it, they'd lose an awful lot of money. Alas.
Erm, do people want to use VB for programming libraries and core system tools? (I thought we were talking about that case, as the Go authors clearly stated that's their target.)
Go competes with C/C++ rather than with VB or any Lisp.
C'mon guys, you don't have to bash any language or solution only because somebody wants to use it for some reason (which may be different than yours).
Having Go in a toolchain traditionally reserved to gcc will only make things better and will NOT threaten a position of your shiny new high level language.
I wish Go people success. They are smart hackers from the ol' days.
As a Common Lisp programmer, it's sad to see somebody do to another language what people do to CL: one superficial glance at the syntax and they write it off, without any deeper criticism.
After living with a nice s-exp language, C-like syntax seems like it brings a lot of problems with very few benefits. However I wouldn't dismiss a language based only on the syntax.
It seems you (as well as several other people) can only imagine a future with Haskell or Lisp syntax as if anything else is inferior. I find that to be foolish and immature.
Go was specifically designed to be a systems language, period. Lisp was never designed as such, and perhaps that is why it's hardly used as a systems language today. Personally, I like Lisp, Haskell, and Python, but I also know C is a great systems language, even though it's not perfect and a bit past its prime. Thus Go seems to be a good progression from the C/C++ family (I never really saw Java as a systems language, but that's my opinion).
Anyway, I agree: I think you're becoming a bit of a Lisp snob. It might suit you to be a bit more open-minded and objective. Even if you don't fully understand the design of Go, you should be able to understand that it was designed by people who have credible experience and skill in designing languages and systems. And sorry to spoil things for you, but the last thing I can imagine is Clojure being the next systems language.
Your post essentially is a brief comparison of a selection of fruits without much substance.
Lastly, here's a piece of advice:
"That's why I reserve the right to match the language to the problem, and even - often - to coordinate software written in several languages towards solving a single problem."
Snob or not, what increasingly pulls me to languages are the highly productive DSLs that seem to fall out of them. For example, I need to deliver a few lightweight web apps/services, and I stumble across Sinatra on Ruby (with a little Sequel thrown in, mixed with the oci8 Oracle gem), which seems to do all I need. My little Ruby code snippets look just like that: snippets that are short and expressive of what I'm trying to do.
Lisp seems to be the "godfather" of these things and I'm more and more interested in what makes me productive, without being surrounded by a team of enterprise programmers and all the boilerplate that seems to fall out of other languages.
I make my living hacking SQL. What I'm really looking for there is the ability to create a lambda on the fly for custom grouping (e.g., implementing a business rule dynamically). I've worked on at least three big enterprise projects where I could have eliminated thousands, if not tens of thousands, of LOC with a feature like that. Maybe I should punt and roll with a Prolog engine in the Lisp of choice. Who knows.
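In a Lisp the grouping rule is just a function value, built at runtime if need be. A sketch (GROUP-BY and the business rule here are made up):

    (defun group-by (key-fn rows)
      (let ((groups (make-hash-table :test #'equal)))
        (dolist (row rows groups)
          (push row (gethash (funcall key-fn row) groups)))))

    ;; The "business rule" assembled on the fly:
    (group-by (lambda (order)
                (if (> (getf order :amount) 1000) :large :small))
              '((:id 1 :amount 250) (:id 2 :amount 5000)))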
I really enjoy the articles and comments on these topics here on HN. Keep 'em coming.
If you're truly passionate about hacking, it would be hard to avoid becoming a LISP snob at some point in your development. But if you're pragmatic, you'd be able to transfer what you've learned from LISP to other environments that may be better suited to today's technological landscape. Is Google Go the way to go? Only time will tell. But there's little you can usefully do with LISP these days (unless the scope of the program you're working on is so restricted that you really only need the language purely).
If you get into compilers and language design, all languages turn into more formalized versions of Lisp mixed with some libraries and macros. Lisp in many ways is little more than a serialization format for a syntax tree, with a DSL for converting that tree into an executable program.
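The whole idea fits in two lines:

    (defvar *tree* '(+ 1 (* 2 3)))  ; a list, i.e. plain data
    (eval *tree*)                   ; => 7; the same list, run as a program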
That's both its biggest advantage and disadvantage. It's an advantage in that it can be stretched and flexed to cover any problem domain by rigging up a language suited to the domain. On the other hand, the fact that it's so flexible means there are fewer common points of reference for composition in the large. Learning a language for a domain is usually more work than learning a library that uses familiar idioms, even if it is more flexible in the long term. If that domain isn't your bread and butter, a library suits you better.
It comes down to an economic argument, I believe. Some level of centralization is necessary to overcome the transaction costs of fitting lots of little languages together, which each have their own ideas about how things should work. It helps hugely in tooling, debugging, analysis, etc. Syntactic transformation via macros from a high-level language to a lower level isn't enough to create a practical modern language in the large. You need more.
As to the practical applicability of Lisp, I think you can get most of what you need with something like Clojure.
Even pg admits that Hacker News is a technology failure. HN is good because of the community, not the software. The software in fact seems to be a hacked together half-assed job.
No, that's not true. The software works quite well, especially considering how few lines of code it is. (To be able to say that was one of the goals for both app and language.)
What I said was that it's not user features that keep people here. But there is more to the software than user features. A good example is the code that protects against various kinds of abuse, like spam, trolls, and voting rings. By LOC that is a large percentage of the total, and HN as a community would be long since dead without it.
Lisp programmers already tend to use indentation the way Python does. Add more rules for indentation and newlines, and voila: all or almost all parentheses become superfluous.
Brackets make macros easy, because they make the abstract syntax tree blatant. Make it less blatant, and macros will become much more difficult to write. Given the options of i) macros are easy to write, and ii) Python-style indentation rules enforced by the interpreter/compiler, I'll take (i).
I'm telling you that commercially successful high-tech things in general, and programming languages in particular, should be easy for average people to use. Java and especially PHP (which is a total mess that was never really designed) are the proof of concept. Note that both of them use C-like syntax.
The biggest proof is, of course, the mobile phone market. Usability and simplicity (along with simple visual effects) sell. And its interface practices and some widgets were adopted on the web.
And yes, Java (and Windows) was designed to use low-skilled labor. There is nothing wrong with it.
"And yes, Java (and Windows) was designed to use low-skilled labor."
This is an assertion I've seen repeated many times, but never with anything to back it up. Do have any evidence for this, or is it just a feeling that you have? Based on what I've read, Java was designed for use in embedded systems like set top boxes, and the authors wanted to design away some common developer errors. An evolutionary biologist might say that Java was "pre-adapted" for use by low-skilled labor, but I don't think there is any indication that it was designed for it.
The authors wanted to design away some common developer errors, yes: errors which the average developer cannot overcome.
There is no rocket science in memory management and pointer manipulation, but, from a commercial (manager's) point of view, those difficult-to-find-and-debug memory issues are a common cause of trouble with schedules and budgets, because good programmers are rare, expensive, and difficult to deal with, while average code monkeys are cheap in the first place, and easy to hire and replace.
That's why Java is the de facto standard for corporate in-house development (read: code factories), and no one in that world even cares about stuff like the (theoretical) ability to run the same code on a different platform, especially while it is impossible in so-called objective reality. (Just try to run some bloated, poorly designed Spring-Hibernate-with-dependencies project on a platform other than x86.)
And finally, consider RoR: same approach, same big success.