I believe every new LISP author should begin their presentation with how their language compares to other LISPs (Common Lisp, Racket, and Clojure in particular). Mentioning that it's a Lisp-1 at the beginning is nice, but not really enough.
I agree, though they did put some of the first-questions-that-come-to-mind on the second page, "https://www.ale-lang.org/intro/":
> What is Ale?
> Ale is a Lisp-1. In this way it is more like Scheme than like Common Lisp. Unlike many Lisps, Ale does not support both interpreted and compiled operation. Instead, when the programmer performs an (eval), Ale immediately compiles the form and invokes its virtual machine. And unlike nearly all Lisps, bindings in Ale are immutable, even in namespaces.
> Ale borrows from Clojure where syntax is concerned, but it diverges in many ways. It is designed to be hosted in a Go process. It is also designed to allow multiple hosted instances within that process, each one being completely isolated from the other.
Most new Lisp implementations are hobby/educational, and/or driven by a particular corner of implementation or language design (e.g., target the JVM or some other language backend, fit on a microcontroller, support a million threads, support STM or some unusual evaluation model, new terse syntax).
Would be nice if new Lisps that were focused only on implementation (rather than language design) used R7RS Scheme or CL as their language design. Then they'd have a better chance of uptake, compared to a new "generic Lisp".
Excellent. You might also like R5RS or R7RS, plus `syntax-case` (though `syntax-rules` is very nice, when the problem fits it). For work that builds upon Scheme's syntax extension mechanisms, see Racket for ideas, including its syntax objects, very powerful `syntax-parse`, a very simple template-based syntax transformer, and the `#lang` feature.
These languages all look like they were crafted so that they run no existing code. Even if the language is 'more like Scheme', as the author says, there is enough difference built in that no Scheme program runs.
Imagine being directly able to use one of the countless Scheme books/libraries/... to experiment with it. To work with something like SICP one really needs only a tiny Scheme...
I believe it has to do with namespaces for naming functions vs variables. A lisp-1 has one namespace that encompasses variables and functions. In a lisp-2 they are separate and so it is possible to have a variable with the same name as a function.
The nice thing about Lisp-2 stems from the fact that Lisp code is made not only of function calls, but also of special operators.
In a Lisp-1, simple variable bindings can shadow special operators. If you name a variable let, you're locked out of further let binding in that scope. That is ugly. Yet, you can hardly prevent let being used as a variable name. In a Lisp-2, we don't have this problem. We have a more benign version of it, rather: what happens if someone names a function let. That can be dealt with by a compiler warning or error. E.g. our implementation can warn that a special operator is redefined. And then it can cheerfully ignore the redefinition, so that let continues to work.
Then there is the question, in a Lisp-1, given that we have a let operator, and operators and variables are in the same space, what the heck is the value of let as a variable? What should (let ((let let)) let) return? This is hand-waved away with something lame like let having an undefined value or whatever.
This is ugly and points at Lisp-1 not being as clean and consistent as it is cracked up to be.
With macros, it's possible for the same symbol to have both a function and macro binding! In a Lisp-2, that's like 2.5 namespaces.
This is the TXR Lisp interactive listener of TXR 219.
Quit with :quit or Ctrl-D on empty line. Ctrl-X ? for cheatsheet.
1> (defun foo (arg) (* 10 arg))
foo
2> (defmacro foo (arg) ^'(multiply ,arg by 10))
foo
3> (foo 15)
(multiply 15 by 10)
4> (mapcar 'foo '(1 2 3))
(10 20 30)
5> (fboundp 'foo)
t
6> (mboundp 'foo)
t
7> (mmakunbound 'foo)
foo
8> (mboundp 'foo)
nil
9> (foo 10)
100
It is said glibly that Lisp-1 dialects uniformly evaluate all positions of a form, rather than treating the leftmost one specially. But that is actually not true. A Lisp-1 dialect, just like Lisp-2, looks at the first position and treats the expression specially if there is a special operator or macro there! (But: not if it's a symbol macro; yet, symbol macros and macros end up in the same namespace.)
Moreover, after a function call form like (f x y) is evaluated (uniformly, to be sure), the semantics isn't uniform: the first value denotes a function (or other callable object), and the remaining values denote arguments to be applied to it. That's a fundamental asymmetry, shared with Lisp-2. Why fret about symmetry in the evaluation, when the semantics of the application itself is ultimately asymmetric?
In the end it just boils down to the pragmatics: hygiene considerations versus not having to use funcall for working with higher order functions.
Common Lisp is a Lisp-2; Scheme is a Lisp-1. Lisp-2 is handier [to me] in many ways but it requires an inelegant syntax when you want to evaluate a function that returns a function and then call said returned function.
Assume you have a function foo that when called returns the function '+.
To then call that function with args 1 & 2 you'd say ((foo) 1 2) in a Lisp-1, whereas a Lisp-2 needs funcall: (funcall (foo) 1 2).
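As a toy model (Python here, purely illustrative — not how any real Lisp is implemented), the difference can be sketched as one environment versus two:

```python
# Toy sketch of Lisp-1 vs Lisp-2 name lookup (illustration only).
import operator

# Lisp-1: a single environment; the call position is evaluated like any
# other position, so ((foo) 1 2) is just two nested lookups/applications.
lisp1_env = {"foo": lambda: operator.add}
result_lisp1 = lisp1_env["foo"]()(1, 2)   # ((foo) 1 2)

# Lisp-2: separate function and variable cells. The call position is
# resolved in the function namespace, so applying a *value* that happens
# to be a function requires an explicit funcall.
function_ns = {"foo": lambda: operator.add}
variable_ns = {}   # a variable named "foo" here would not clash

def funcall(f, *args):
    return f(*args)

result_lisp2 = funcall(function_ns["foo"](), 1, 2)   # (funcall (foo) 1 2)

print(result_lisp1, result_lisp2)   # 3 3
```

The asymmetry the comment complains about is visible in the last line: the Lisp-2 call can't simply put the returned value in the operator position.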
foo()
{
echo hi
}
$ foo=1
$ foo
hi
$ echo $foo
1
$ var=$(foo)
$ echo $var
hi
$ var=$foo
$ echo $var
1
Separate space for variables and functions/commands. Hardly anyone bats an eyelash about this.
This provides a hygiene benefit similar to what we have in a Lisp-2. Specifically, suppose we do this:
$ ls=abc
by doing that, we have not shadowed the ls command. We can name shell variables without worrying about that sort of clash. This is so obvious, you don't see it being spelled out to anyone.
Exactly this. When I write Scheme code, it trips me up when I have to re-spell variable names to avoid clashes. Experienced Scheme programmers don't find this difficult; they think it's weird to have to use funcall. It's just a function of what you're more used to.
It happens in C, when you have short function names. In the C internals of TXR, I have to be careful about using variable names like list or cons, because then those functions are not available.
A certain list accumulating macro calls the tail function. If I accidentally introduce a variable called tail where this macro is used, oops!
In typical C code, you're protected from clashes by using short variable names, and long function names.
If you're doing anything involving externally defined/named data and analysis, working with it interactively/exploratively, I find a Lisp-2 infinitely better, just because you can BOTH represent data using the same names that external sources are using AND not clobber/shadow any functions, macros, or code currently in scope.
It's a little thing, but if I designed a language for data munging/science/analysis, it would be a Lisp-2...
/and yes, i'm well aware that python and julia are not...
I come from a scheme tradition where higher order functions are used for many things. I often use the name lst for generic lists.
I am not so sure I think it matters. In the place where it matters the most (macros) scheme has macro hygiene. That discussion is not worth having here though, because it is really just a matter of taste.
I like both ways, but prefer the Scheme way, where you have to explicitly break hygiene instead of explicitly trying not to shadow bindings. I understand that defmacro can be nice sometimes, and for that most Schemes provide it (heck, even Racket has it!).
Now that I think of it, I certainly have tripped on "list" arguments in Python, but adopted haskellers' "xs" for such generic uses and forgot about it. BTW, GHC throws especially funny messages when you define a variable like "head" or "last" and then try to call the shadowed function. OTOH, Emacs' python-mode highlights builtins regardless of context, very convenient to catch such lapses immediately.
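Since the comment above mentions tripping on "list" in Python: Python behaves like a Lisp-1 here, with one namespace for variables and callables, so a binding can shadow a builtin. A small illustration (function names are invented for the example):

```python
# Illustration: a parameter named "list" shadows the builtin constructor,
# exactly the kind of clash a Lisp-2's separate namespaces avoid.

def count_items(list):
    # "list" is now the argument; calling list(...) applies the argument,
    # not the builtin.
    try:
        list("abc")
        return "builtin still visible"
    except TypeError:
        return "builtin shadowed"

def count_items_xs(xs):
    # The Haskell-style name "xs" leaves the builtin alone.
    return len(list(xs))

print(count_items([1, 2, 3]))     # a list object is not callable
print(count_items_xs((1, 2, 3)))
```

This is also why GHC's messages about shadowed `head`/`last` are funny rather than fatal: the clash is caught at the call site, not at binding time.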
I think this has a lot of potential if it's able to call Go functions, like Clojure can call Java functions. The language would have a healthy ecosystem of libraries to boot.
It's weird to me that there aren't more compile-to-Go languages. The ecosystem of Go packages is great, and the runtime is undergoing continuous refinement by a team of really smart people.
The main feature Go was missing that took away most of the usefulness of writing a Lisp in it was FFI. If you can't dynamically get identifiers (constants, functions, variables) by a string at runtime, you lose most of the interesting desktop possibilities. It's probably still useful as a server-side language but Lisps seem the most useful in the GUI space.
Also it says this in the author's latest blog post:
> "Yes, there are a lot of parens, but that is a small price to pay for a language that is capable of morphing itself into nearly anything you need it to be. And the way that Lisp does that is with a macro system that puts #define to shame!"
I'm convinced this is not the real benefit of a Lisp. The only thing macros can do that functions can't is hide control flow. And 99% of the time that's a bad thing that library authors should not use. It could be used to make other language features such as concurrency, but that's a slippery slope.
The real benefit, in my opinion, is the balanced syntax that makes things like Emacs + paredit possible, which increases your productivity like 5x-10x. It's on another level, definitely worth trying for a few months if you already work with a Lisp. But I don't see any other real value in Lisp's syntax or in macros.
Nah man, macros are the bee's knees! Being able to add new constructs to a language is neat. My favourite examples are match.scm - a pattern matcher that produces zero-overhead code - and Racket's for loops, which are just regular macros.
Being able to add features like that is amazing. Clojure's arrow syntax? No problem. Python's list comprehensions? Can be done! A library for pattern matching? Many to choose from!
Pattern matchers that were added to other languages as libraries are often weird to use. Match.scm has the same syntax as cond or case.
Lisp always sounds cool to me until someone starts talking about macros. What's it like inheriting a Lisp code base, finding out it's full of homemade macros like list comprehensions, and that they're not documented? Inheriting a code base is always scary, but I feel like it kicks it up a notch when your predecessors can customize the language itself. Are my fears justified?
> What's it like inheriting a Lisp code base, finding out it's full of homemade macros like list comprehensions, and that they're not documented?
If such a macro is used in more than two places, it's generally a relief that the author did that instead of doing copy and paste by hand.
The author had some coding idea, and formalized it into a little robot that writes that idea, which has a name.
Even if that thing isn't accompanied by documentation, it can serve as a kind of documentation to what it's doing.
Code is going to be full of homemade functions that are not documented; whether they are functions that write code at compile time, or whether they are run-time functions, is kind of a minor concern.
Macros look the same as functions in Lisp; the only difference is that you can't pass them as function arguments. Unless there is a bug in the macro, you spend the same time understanding the macro that you would spend understanding a function.
Most macros are just simple syntactic abstractions that expand a small form into more lines of relatively simple code, to avoid boilerplate. Very rarely do people write complex beasts without documentation.
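The "macros are just functions that run earlier" point can be sketched in a few lines (a hypothetical toy in Python, not a model of any real macro system; the name unless_macro is invented):

```python
# Toy sketch: a macro is a function from source code to source code,
# run before evaluation. Understanding it means reading its expansion,
# just as you would read a function body.

def unless_macro(condition_src, body_src):
    # Expand (unless cond body) into an equivalent conditional form.
    return f"(None if ({condition_src}) else ({body_src}))"

expansion = unless_macro("x > 0", "'non-positive'")
print(expansion)       # the generated code is inspectable, like macroexpand

x = -5
print(eval(expansion))
```

Being able to print the expansion is the Lisp debugging story: an undocumented macro still shows you exactly the code it writes.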
I forgot to mention that loop requires a lot of mutation, which sort of sucks in the Scheme world, where you can use multi-shot continuations and ruin the fun. There are loop implementations for Racket (which produce decent code and all that), but they are rarely used.